11 |
KL-cut based remapping / Remapeamento baseado em cortes KL. Machado, Lucas. January 2013 (has links)
This work introduces the concept of k-cuts and kl-cuts on top of a mapped circuit in a netlist representation. This new approach is derived from the concept of k-cuts and kl-cuts on top of AIGs (and-inverter graphs), respecting the differences between these two circuit representations. The main differences are: (1) the number of allowed inputs for a logic node, and (2) the presence of explicit inverters and buffers in the netlist. Algorithms for enumerating k-cuts and kl-cuts on top of a mapped circuit are proposed and implemented. The main motivation to use kl-cuts on top of mapped circuits is to perform local optimization in digital circuit logic synthesis. The main contribution of this work is a novel iterative remapping approach using kl-cuts, reducing area while keeping the timing constraints satisfied. The use of complex gates can potentially reduce the circuit area, but they have to be chosen wisely to preserve timing constraints. Commercial logic synthesis tools work better with simple cells and are not capable of taking full advantage of complex cells. The proposed iterative remapping approach can exploit a larger set of logic gates with different logic functions, reducing circuit area while respecting global timing constraints by performing an STA (static timing analysis) check. 
Experimental results show that this approach reduces the area of the combinational portion of circuits by up to 38% for a subset of the IWLS 2005 benchmarks, when compared to results obtained from commercial logic synthesis tools. Another contribution of this work is a novel yield model for digital integrated circuit (IC) manufacturing that considers lithography printability problems as a source of yield loss. The use of regular layouts can greatly improve lithography resolution, but introducing regularity incurs a significant area overhead. This is the first approach that considers the trade-off of cells with different levels of regularity and different area overheads during logic synthesis, in order to improve overall design yield. The kl-cut based technology remapping tool was modified to use this yield model as its cost function, improving the number of good dies per wafer, with promising results.
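The k-cut enumeration described above follows a well-known recursive scheme from the cut-based mapping literature: the cuts of a node are obtained by merging one cut from each fanin and discarding merges with more than k leaves. The sketch below is an illustrative Python rendition of that generic scheme, not the thesis's implementation; the function name and toy netlist are invented for the example.

```python
from itertools import product

def enumerate_k_cuts(fanins, node, k, memo=None):
    """Enumerate the k-feasible cuts of `node` in a DAG.

    fanins maps each node to its list of fanin nodes (primary inputs have none).
    A cut is a frozenset of leaf nodes; every node has the trivial cut {node}.
    """
    if memo is None:
        memo = {}
    if node in memo:
        return memo[node]
    cuts = {frozenset([node])}  # trivial cut
    ins = fanins.get(node, [])
    if ins:
        # merge one cut from each fanin, keeping only cuts with <= k leaves
        fanin_cuts = [enumerate_k_cuts(fanins, f, k, memo) for f in ins]
        for combo in product(*fanin_cuts):
            merged = frozenset().union(*combo)
            if len(merged) <= k:
                cuts.add(merged)
    memo[node] = cuts
    return cuts

# toy netlist: a = AND(i1, i2), b = AND(a, i3)
fanins = {"a": ["i1", "i2"], "b": ["a", "i3"]}
cuts_b = enumerate_k_cuts(fanins, "b", 3)  # {{b}, {a, i3}, {i1, i2, i3}}
```

A kl-cut additionally bounds the number of outputs l of the covered region; the same merge-and-filter idea applies, with an extra filter on the output count.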
|
14 |
FPGA implementation of an undistortion model with high parameter flexibility and DRAM-free operation / FPGA-implementering av en oförvrängd modell med hög parametervariabilitet och DRAM-fri funktion. McCormick, Zacharie. January 2023 (has links)
Computer Vision (CV) has become omnipresent in our everyday life, and it is seeing more and more use in industry. This movement creates demand for ever more performant systems to keep up with increasing requirements in manufacturing speed and autonomous behaviour. Such CV systems need to run complex algorithms at real-time speed, sometimes even in energy-constrained systems, so efficient implementations of these algorithms are a must. One such algorithm is the lens rectification algorithm (also sometimes called the undistortion algorithm), often one of the first algorithms applied to correct for the multiple imperfections that can occur in a camera and lens system. This algorithm has been implemented on Field-Programmable Gate Arrays (FPGAs) in past work, but those implementations either relied heavily on Dynamic Random Access Memory (DRAM) or used a subset of the full lens distortion model used by OpenCV, restricting themselves to small distortion amounts by having access to only parts of the image at a time. This thesis aims to create an open-source, DRAM-free FPGA implementation of the OpenCV lens rectification model, using the full 12-parameter model and allowing the use of any parameter values, which, to our knowledge, has not been done before. To do so, a hybrid programming approach was taken, meaning that both Hardware Description Languages and High-Level Synthesis were used to arrive at the final implementation. The final implementation achieves 1300 frames per second with sub-millisecond latency at a resolution of 320x240 on grayscale images.
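For reference, OpenCV's full 12-parameter distortion model combines a rational radial term (k1 to k6), tangential terms (p1, p2), and thin-prism terms (s1 to s4). The sketch below applies the documented forward mapping to a normalized image coordinate; the function name is illustrative, and a hardware undistortion pipeline would evaluate (the inverse of) this mapping per pixel, which is what makes a DRAM-free streaming design challenging.

```python
def distort_point(x, y, dist):
    """Apply OpenCV's 12-parameter lens distortion model to a
    normalized image coordinate (x, y).

    dist = (k1, k2, p1, p2, k3, k4, k5, k6, s1, s2, s3, s4),
    following OpenCV's distortion-coefficient ordering.
    """
    k1, k2, p1, p2, k3, k4, k5, k6, s1, s2, s3, s4 = dist
    r2 = x * x + y * y
    r4, r6 = r2 * r2, r2 * r2 * r2
    # rational radial factor shared by both axes
    radial = (1 + k1 * r2 + k2 * r4 + k3 * r6) / (1 + k4 * r2 + k5 * r4 + k6 * r6)
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x) + s1 * r2 + s2 * r4
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y + s3 * r2 + s4 * r4
    return xd, yd

# with all coefficients zero the mapping is the identity
assert distort_point(0.1, -0.2, (0.0,) * 12) == (0.1, -0.2)
```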
|
15 |
Data and Processor Mapping Strategies for Dynamically Resizable Parallel Applications. Chinnusamy, Malarvizhi. 18 August 2004 (has links)
Due to the unpredictability in job arrival times in clusters and widely varying resource requirements, dynamic scheduling of parallel computing resources is necessary to increase system throughput. Dynamically resizable applications provide the flexibility needed for dynamic scheduling. These applications can expand to take advantage of additional free processors, or to meet a Quality of Service (QoS) deadline, or can shrink to accommodate a high priority application, without getting suspended.
This thesis is part of a larger effort to define a framework for dynamically resizable parallel applications. This framework includes a scheduler that supports resizing applications, an API to enable applications to interact with the scheduler, and libraries that make resizing viable. This thesis focuses on libraries for efficient resizing of parallel applications—efficient in terms of minimizing the cost of data redistribution, choosing and allocating the right set of additional processors, and focusing on the performance of the application after resizing. We explore the tradeoffs between these goals on both homogeneous and heterogeneous clusters. We focus on structured applications that have 2D data arrays distributed across a 2D processor grid.
Our library includes algorithms for processor selection and processor mapping. For homogeneous clusters, processor selection involves choosing the number of processors to be added, and processor mapping decides the placement of the new processors within the given topology so as to minimize the amount of data to be redistributed. For heterogeneous clusters, since the processing powers of the processors vary, there is the additional problem of choosing the right set of processors to add. We also present results that demonstrate the effectiveness of our approach. / Master of Science
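As a toy illustration of the redistribution-cost question discussed above (not the thesis's algorithm), the sketch below counts how many elements of a block-distributed 2D array change owner when the processor grid is resized; all names and the simple per-dimension block-owner model are assumptions for the example.

```python
def owner(n, procs, idx):
    """Owning processor of index idx under a block distribution
    of n items over procs processors."""
    block = -(-n // procs)  # ceiling division: block size per processor
    return idx // block

def redistribution_cost(n, old_grid, new_grid):
    """Count elements of an n x n array whose owning processor coordinate
    changes when resizing from old_grid (p, q) to new_grid (p2, q2),
    assuming a 2D block distribution in both grids."""
    moved = 0
    for i in range(n):
        for j in range(n):
            before = (owner(n, old_grid[0], i), owner(n, old_grid[1], j))
            after = (owner(n, new_grid[0], i), owner(n, new_grid[1], j))
            if before != after:
                moved += 1
    return moved

# growing from a single processor to a 2x2 grid moves 12 of 16 elements
cost = redistribution_cost(4, (1, 1), (2, 2))
```

Minimizing this count over the possible placements of the added processors is exactly the processor-mapping objective described in the abstract.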
|
16 |
Διερεύνηση και εφαρμογή της τεχνικής “Μεταβλητός αστερισμός συμβόλων” (Constellation Remapping) σε επικοινωνίες πολλαπλών κεραιών (ΜΙΜΟ) / Investigation and application of the "Constellation Remapping" technique in multiple-antenna (MIMO) communications. Μπλάτσας, Μιλτιάδης. 31 August 2012 (has links)
Communications using multiple antennas are a promising technology for efficiently combating the random fading exhibited by the wireless communication channel. Communication systems in which both the transmitter and the receiver are equipped with more than one antenna define a Multiple Input Multiple Output (MIMO) system, in which the probability that all of the resulting channels are simultaneously in a fade is clearly smaller than in single-antenna systems (spatial diversity). Usually, in a MIMO system, the transmitter performs a processing step known as space-time coding in order to improve system performance with respect to the probability of erroneous transmission. Recently, (1) proposed a new technique for improving the error probability in a system that uses retransmission. The technique consists of modifying, at each retransmission, the rule by which the binary data to be transmitted are mapped to symbols. The subject of this work is the application of this technique to a MIMO system in which the transmitter uses a different bit-to-symbol mapping rule on each of the transmit antenna branches. Further objectives of this work are:
1. Comparing the technique with several well-known space-time coding techniques.
2. Combining the technique with space-time coding techniques.
3. Investigating the additional receiver complexity required by the technique, especially when the MIMO channel involved is frequency-selective and an equalizer is required. / In this project, we present a simple but effective method of enhancing and exploiting diversity from multiple packet transmissions in systems that employ nonbinary linear modulations such as phase-shift keying (PSK) and quadrature amplitude modulation (QAM). This diversity improvement results from redesigning the symbol mapping for each packet transmission. By developing a general framework for evaluating the upper bound of the bit error rate (BER) with multiple transmissions, a criterion to obtain optimal symbol mappings is attained. The optimal adaptation scheme reduces to solutions of the well-known quadratic assignment problem (QAP). Symbol mapping adaptation requires only a small increase in receiver complexity but provides very substantial BER gains when applied to additive white Gaussian noise (AWGN) and flat-fading channels.
My own contribution in this research was to combine this Constellation Remapping technique with the Alamouti code. The results obtained from this combination show that the method leads to a lower BER than both the Alamouti code alone and the conventional Constellation Remapping technique.
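The core idea, changing the bit-to-symbol mapping on each (re)transmission and detecting jointly, can be shown with a small sketch. The two QPSK mappings below are hypothetical, chosen only to illustrate the mechanism; a real design would pick the remapping by optimizing the BER bound via the QAP formulation mentioned above.

```python
# first transmission uses a Gray-mapped QPSK constellation; the
# retransmission uses a remapped constellation (hypothetical choice)
QPSK1 = {(0, 0): 1 + 1j, (0, 1): -1 + 1j, (1, 1): -1 - 1j, (1, 0): 1 - 1j}
QPSK2 = {(0, 0): 1 + 1j, (1, 1): -1 + 1j, (0, 1): -1 - 1j, (1, 0): 1 - 1j}

def joint_detect(r1, r2):
    """Jointly detect the bit pair from two received samples (one per
    transmission) by minimizing the summed squared Euclidean distance
    to the candidate symbols under each transmission's mapping."""
    return min(QPSK1, key=lambda b: abs(r1 - QPSK1[b]) ** 2 + abs(r2 - QPSK2[b]) ** 2)

bits = (0, 1)
r1, r2 = QPSK1[bits], QPSK2[bits]  # noiseless channel, for illustration
assert joint_detect(r1, r2) == bits
```

Because bit pairs that are constellation neighbours in one mapping are placed far apart in the other, the joint metric separates candidates better than combining two identically-mapped transmissions.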
|
17 |
Haut contraste par réarrangement de pupille pour la détection d'exoplanètes / High contrast using pupil remapping for exoplanetary detection. Gauchet, Lucien. 01 December 2017 (has links)
The detection of exoplanets and of young-star environments such as debris disks faces two major difficulties: on one hand, the small angular separation between the companion (or disk) and its host star, and on the other hand, the high flux contrast between the two components. Interferometry is one of the techniques that addresses both issues, providing detection at both high angular resolution and high dynamic range. This is particularly the case in nulling interferometry, in which the flux of the main star is extinguished using the coherence properties of light: light from two or more telescopes is recombined so that photons from the main star interfere destructively while photons from the companion or surrounding disk interfere constructively. My thesis work takes part in this context, with the study of observational data of eight debris disks taken at the Very Large Telescope using the Sparse Aperture Masking interferometric technique. I performed the reduction of the interferometric data and an analysis of the closure phase terms. As no companion was detected in the data, I derived detection limits in terms of luminosity and of mass estimated using isochrones from evolutionary models. My thesis also included an experimental part, with the design and calibration of the FIRST-IR (Fibered Imager foR a Single Telescope InfraRed) instrument in the laboratory at the Meudon Observatory. 
This instrument is an interferometer that combines the fibered pupil remapping technique with integrated-optics-based recombination of light. The integrated optics studied here is a planar optical component on which waveguides have been etched; it is of the nuller type and takes as input the flux of four sub-pupils. The waveguides are arranged so as first to perform a nulling function on three baselines and then to measure the interference fringes on the three nulled outputs. I carried out a complete calibration of this integrated optics component as well as closure phase measurements. To conclude, I have shown the viability of the FIRST-IR instrument using this new nuller-type integrated optics architecture. In particular, I demonstrated that the closure phase measurement remains stable for a point-source target, regardless of the interferometric nulling level applied.
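The closure phase central to this instrument has a simple algebraic property: summing the measured phases around a triangle of baselines cancels any per-telescope phase error (atmospheric or instrumental), leaving only the object's intrinsic phase information. A minimal numerical sketch, with invented names and toy values:

```python
import math
import random

def closure_phase(obj_phase, tel_error):
    """Closure phase over a triangle of baselines (0-1, 1-2, 2-0).

    obj_phase[(i, j)]: intrinsic object phase on baseline (i, j);
    tel_error[i]: phase error at telescope i.
    The measured baseline phase is obj + error_i - error_j, so the
    per-telescope errors cancel in the sum around the triangle.
    """
    def measured(i, j):
        return obj_phase[(i, j)] + tel_error[i] - tel_error[j]
    return measured(0, 1) + measured(1, 2) + measured(2, 0)

obj = {(0, 1): 0.3, (1, 2): -0.1, (2, 0): 0.5}
errors = [random.uniform(-math.pi, math.pi) for _ in range(3)]
# telescope errors cancel: the closure phase equals the sum of object phases
assert abs(closure_phase(obj, errors) - (0.3 - 0.1 + 0.5)) < 1e-12
```

This cancellation is why closure phase is a robust observable for aperture masking and for nuller stability tests alike.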
|
18 |
Computational Phase Correction of a Partially Coherent Multi-Aperture System. Krug, Sarah Elaine. 15 June 2020 (has links)
No description available.
|
19 |
Brain circuits underlying visual stability across eye movements—converging evidence for a neuro-computational model of area LIP. Ziesche, Arnold; Hamker, Fred H. 15 July 2014 (has links)
The understanding of the subjective experience of a visually stable world despite the occurrence of an observer's eye movements has been the focus of extensive research for over 20 years. These studies have revealed fundamental mechanisms such as anticipatory receptive field (RF) shifts and the saccadic suppression of stimulus displacements, yet there currently exists no single explanatory framework for these observations. We show that a previously presented neuro-computational model of peri-saccadic mislocalization accounts for the phenomenon of predictive remapping and for the observation of saccadic suppression of displacement (SSD). This converging evidence allows us to identify the potential ingredients of perceptual stability that generalize beyond different data sets in a formal physiology-based model. In particular we propose that predictive remapping stabilizes the visual world across saccades by introducing a feedback loop and, as an emergent result, small displacements of stimuli are not noticed by the visual system. The model provides a link from neural dynamics, to neural mechanism and finally to behavior, and thus offers a testable comprehensive framework of visual stability.
|
20 |
Design Space Exploration and Optimization of Embedded Memory Systems. Rabbah, Rodric Michel. 11 July 2006 (has links)
Recent years have witnessed the emergence of microprocessors that are
embedded within a plethora of devices used in everyday life. Embedded
architectures are customized through a meticulous and time-consuming
design process to satisfy stringent constraints with respect to
performance, area, power, and cost. In embedded systems, the cost of
the memory hierarchy limits its ability to play as central a
role as it does in general-purpose systems. This is due to stringent
constraints that fundamentally limit the physical size and
complexity of the memory system. Ultimately,
application developers and system engineers are charged with the heavy
burden of reducing the memory requirements of an application.
This thesis offers the intriguing possibility that compilers can play
a significant role in the automatic design space exploration and
optimization of embedded memory systems. This insight is founded upon
new analytical models and novel compiler optimizations that are
specifically designed to increase the synergy between the processor
and the memory system. The analytical models serve to characterize
intrinsic program properties, quantify the impact of compiler
optimizations on the memory systems, and provide deep insight into the
trade-offs that affect memory system design.
|