11

Managing dynamic non-uniform cache architectures

Lira Rueda, Javier 25 November 2011 (has links)
Researchers from both academia and industry agree that future CMPs will accommodate large shared on-chip last-level caches. However, the exponential increase in multicore processor cache sizes, accompanied by growing on-chip wire delays, makes it difficult to implement traditional caches with a single, uniform access latency. Non-Uniform Cache Access (NUCA) designs have been proposed to address this situation. A NUCA cache divides the whole cache memory into smaller banks that are distributed across the chip and can be accessed independently. Response time in NUCA caches depends not only on the latency of the actual bank, but also on the time required to reach the bank that holds the requested data and to send it to the core. The NUCA organization therefore allows banks located next to the cores to have lower access latencies than banks that are further away, mitigating the effects of the cache's internal wires. These cache architectures have traditionally been classified by their placement decisions as static (S-NUCA) or dynamic (D-NUCA). In this thesis we focus on D-NUCA, as it exploits the dynamic features that NUCA caches offer, such as data migration. The flexibility that D-NUCA provides, however, raises new challenges that complicate the management of this kind of cache architecture in CMP systems. We have identified these challenges and tackled them from the point of view of the four NUCA policies: replacement, access, placement and migration. First, we focus on the challenges introduced by the replacement policy in D-NUCA. Data migration causes the most frequently accessed data blocks to concentrate in the banks that are closer to the processors. This creates large differences in the average usage rate of the NUCA banks: the banks close to the processors are the most heavily accessed, while the banks that are further away are accessed far less often. Upon a replacement in a particular bank of the NUCA cache, the probability that the evicted data block will be reused by the program differs depending on whether its last location in the NUCA cache was a bank close to the processors or not. The decentralized nature of NUCA, however, prevents a NUCA bank from knowing that another bank is constantly evicting data blocks that are later reused. We propose three different techniques to deal with the replacement policy, The Auction being the most successful one. Then we address the challenges in the access policy. As data blocks can be mapped to multiple banks within the NUCA cache, finding the requested data in a D-NUCA cache is a difficult task. In addition, data can move freely between these banks, so the search scheme must look up all banks to which the requested data block can be mapped in order to ascertain whether it is in the NUCA cache or not. We propose HK-NUCA, a search scheme that uses home knowledge to effectively reduce the average number of messages injected into the on-chip network to satisfy a memory request. With regard to the placement policy, this thesis presents the implementation of a hybrid NUCA cache: we propose a novel placement policy that accommodates both memory technologies, SRAM and eDRAM, in a single NUCA cache. Finally, to deal with the migration policy in D-NUCA caches, we propose The Migration Prefetcher, a technique that anticipates data migrations. In summary, in this thesis we propose different techniques to efficiently manage future D-NUCA cache architectures on CMPs.
We demonstrate the effectiveness of our techniques in dealing with the challenges introduced by D-NUCA caches. Our techniques outperform existing solutions in the literature and are in most cases more energy efficient. / Current CMPs integrate increasingly large last-level caches on chip. Industry roadmaps and academic work show that this trend will continue over the coming years. However, the high delays of the on-chip interconnection network and wiring make it increasingly difficult to implement traditional caches with a single, uniform access latency. NUCA (Non-Uniform Cache Access) designs emerged to address this situation. A NUCA cache divides a large memory into smaller banks that are distributed across the chip and can be accessed independently. The response time of a NUCA cache therefore depends not only on the latency of a bank, but also on the time needed to route the request to and from the NUCA bank that responds. The physical position of a bank on the chip is key to determining the NUCA access latency, so banks located closer to the cores have lower access latencies than those further away. NUCA caches can be classified as static (S-NUCA) or dynamic (D-NUCA) based on their placement decisions. This thesis focuses on D-NUCA. This design allows data to migrate from bank to bank in order to reduce the latency of future accesses, but it also poses further challenges that must be investigated in order to manage these caches efficiently. We have identified and explored these challenges from the point of view of the four NUCA policies: replacement, access, placement and migration. First, we focus on the replacement policy. Data migration lets the most frequently used data concentrate in the banks closest to the cores. This creates large differences in the average usage of the NUCA banks: the banks near the cores are the most heavily accessed, while the distant banks are accessed far less often. Because of the differences in replacement frequency between banks, the probability that an evicted block will be reused in the future rises or falls depending on the bank where the replacement took place. Moreover, previous work on replacement policies is not effective in this kind of cache, since the banks operate independently. We propose three replacement techniques for NUCA, with The Auction providing the greatest benefit. Regarding the challenges in the access policy, since data can be mapped to several banks within the NUCA cache, finding it becomes a complicated and costly task. Here we propose HK-NUCA, an access algorithm that uses the knowledge embedded in the "home" banks to efficiently reduce the average number of accesses needed to resolve a memory request. To analyze the placement policy, this thesis presents the implementation of a hybrid NUCA cache: our placement policy integrates both memory technologies, SRAM and eDRAM, in a single NUCA cache level. Finally, regarding migration in D-NUCA, we propose The Migration Prefetcher, a technique that anticipates data migrations using knowledge acquired from the access history. In summary, this thesis proposes different techniques to efficiently manage future D-NUCA cache architectures in a CMP environment. Throughout the thesis we demonstrate the effectiveness of the proposed techniques in mitigating the effects induced by using D-NUCA caches. Besides achieving higher performance than other mechanisms in the literature, these techniques are in many cases more energy efficient.
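The abstract above turns on two ideas: access latency in a NUCA cache depends on which bank holds a block and how far that bank is from the requesting core, and D-NUCA migrates blocks toward the cores that use them while a home bank keeps track of where they live. The Python sketch below illustrates those ideas only; the grid size, cycle counts, home-bank mapping, and one-step migration rule are assumptions made for the example, not the mechanisms or parameters evaluated in the thesis.

```python
# Minimal, illustrative model of distance-dependent latency, home-knowledge
# lookup, and gradual data migration in a D-NUCA-style cache. All constants
# and rules here are assumptions for the sketch, not the thesis's design.
from dataclasses import dataclass, field

BANK_CYCLES = 4   # assumed bank access latency (cycles)
HOP_CYCLES = 2    # assumed per-hop routing latency on the on-chip network


@dataclass
class Bank:
    x: int
    y: int
    blocks: set = field(default_factory=set)   # block addresses resident here


def hops(core, bank):
    # Manhattan distance between a core position (x, y) and a bank.
    return abs(core[0] - bank.x) + abs(core[1] - bank.y)


class DNucaSketch:
    def __init__(self, cols=4, rows=4):
        self.banks = [Bank(x, y) for y in range(rows) for x in range(cols)]

    def home(self, addr):
        # "Home knowledge" idea: a fixed home bank per address records where
        # the block currently lives, so a request consults it instead of
        # broadcasting a lookup to every bank.
        return self.banks[addr % len(self.banks)]

    def access(self, core, addr):
        holders = [b for b in self.banks if addr in b.blocks]
        if not holders:
            return None   # miss: the request would go to the next memory level
        target = min(holders, key=lambda b: hops(core, b))
        # Latency grows with the distance to the home bank and to the holder.
        latency = (hops(core, self.home(addr)) + hops(core, target)) * HOP_CYCLES + BANK_CYCLES
        self._migrate(core, addr, target)
        return latency

    def _migrate(self, core, addr, current):
        # Gradual migration: move the block to a nearby bank that is closer
        # to the requesting core, so hot blocks end up with low latencies.
        closer = [b for b in self.banks if hops(core, b) < hops(core, current)]
        if closer:
            nxt = min(closer, key=lambda b: hops((current.x, current.y), b))
            current.blocks.discard(addr)
            nxt.blocks.add(addr)


cache = DNucaSketch()
cache.banks[-1].blocks.add(0x2A)       # block starts in the farthest bank
print(cache.access((0, 0), 0x2A))      # first access: long trip across the chip
print(cache.access((0, 0), 0x2A))      # second access: cheaper after migration
```

The second access is cheaper because the block has migrated one bank closer to the requesting core, which is the kind of behaviour the replacement, access, placement and migration policies studied in the thesis have to manage.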
12

Modelagem de espaços inteligentes pessoais e espaços inteligentes fixos no contexto de cenários de computação ubíqua / Personal and fixed smart space modeling in the context of ubiquitous computing scenarios

Vieira, Marcos Alves 26 February 2016 (has links)
Fundação de Amparo à Pesquisa do Estado de Goiás - FAPEG / Advances in electronics allow the creation of everyday devices with computing capabilities, called smart objects. Smart objects assist people in carrying out a variety of tasks and compose smart spaces. When smart spaces are confined to a certain area, they can be referred to as fixed smart spaces. Complementing these, personal smart spaces are composed of the smart objects a user carries, so their boundaries move along with their owner. However, user mobility and the increasing number of smart spaces, fostered also by the Internet of Things (IoT) and the Web of Things (WoT), can lead to smart space overlap, where a certain smart object is configured in different smart spaces, whether fixed or personal. In addition, smart spaces are complex and difficult to model and maintain, as, among other factors, they have to deal with different smart objects. This thesis proposes the use of Model-Driven Engineering to enable the modeling of ubiquitous computing scenarios, considering the coexistence of fixed and personal smart spaces. Its contributions include a metamodel for modeling scenarios composed of personal smart spaces and fixed smart spaces, together with a language and an algorithm aimed at determining the order of access to the resources of a ubiquitous computing scenario. The proposal was validated based on the results of a Systematic Literature Review, conducted to identify the metamodel validation methods most commonly used by researchers in the field. Accordingly, scenarios were modeled with the aid of modeling tools built to produce models conforming to the proposed metamodels. An implementation in Java made it possible to validate the access policy language as well as its processing algorithm. / Advances in electronics are making it possible to create everyday devices with computing capabilities, called smart objects. Smart objects help people carry out their tasks and make up smart spaces. When smart spaces are restricted to a certain area, they are called fixed smart spaces. Complementing these, personal smart spaces are formed by the smart objects a user carries, and their boundaries move along with their "owner". However, user mobility and the growing number of smart spaces, also fostered by the Internet of Things and the Web of Things, can lead to overlapping smart spaces, in which a given smart object may be used in different smart spaces, whether fixed or personal. Moreover, smart spaces are complex and difficult to model and maintain because, among other factors, they must deal with different smart objects. This work proposes the use of Model-Driven Engineering techniques to enable the modeling of ubiquitous computing scenarios, taking into account the coexistence of fixed and personal smart spaces. Its contributions include a metamodel for modeling scenarios composed of personal and fixed smart spaces, and a language and an algorithm for determining the order of access to the resources of a ubiquitous computing scenario. The proposal was validated based on the results of a Systematic Literature Review conducted to identify the forms of metamodel validation and evaluation most used by researchers in the field. Accordingly, scenarios were modeled with the aid of modeling tools built to produce models conforming to the proposed metamodels. An implementation in the Java language made it possible to validate both the access policy language and its processing algorithm.
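To make the overlap problem concrete, the sketch below models a smart object that is configured in both a fixed and a personal smart space and resolves the order in which the overlapping spaces get access to it. The class names, the scalar priority field, and the tie-breaking rule are hypothetical simplifications invented for this illustration; the dissertation defines its own metamodel and access-policy language rather than this ad hoc rule.

```python
# Hypothetical sketch of overlapping smart spaces sharing one smart object.
# The priority field and the ordering rule are assumptions for illustration,
# not the metamodel or access-policy language proposed in the dissertation.
from dataclasses import dataclass, field
from typing import List


@dataclass
class SmartObject:
    name: str


@dataclass
class SmartSpace:
    name: str
    kind: str                                    # "fixed" or "personal"
    objects: List[SmartObject] = field(default_factory=list)
    priority: int = 0                            # assumed scalar priority


def access_order(spaces: List[SmartSpace], obj: SmartObject) -> List[str]:
    # Order the overlapping spaces that contain the object: higher priority
    # first, ties broken in favour of fixed spaces.
    holders = [s for s in spaces if obj in s.objects]
    holders.sort(key=lambda s: (-s.priority, s.kind != "fixed"))
    return [s.name for s in holders]


# Example: a projector shared by a meeting room (fixed space) and a visitor's
# personal smart space that both want to drive it.
projector = SmartObject("projector")
room = SmartSpace("meeting-room", "fixed", [projector], priority=2)
visitor = SmartSpace("alice-devices", "personal", [projector], priority=1)
print(access_order([room, visitor], projector))  # ['meeting-room', 'alice-devices']
```

A fuller treatment would derive such an ordering from models conforming to the proposed metamodel and from the access-policy language processed by the Java implementation, rather than hard-coding the rule as done here.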
13

Faculty Senate Minutes April 4, 2016

University of Arizona Faculty Senate 03 May 2016 (has links)
This item contains the agenda, minutes, and attachments for the Faculty Senate meeting on this date. There may be additional materials from the meeting available at the Faculty Center.
14

Faculty Senate Minutes December 4, 2017

University of Arizona Faculty Senate 06 February 2018 (has links)
This item contains the agenda, minutes, and attachments for the Faculty Senate meeting on this date. There may be additional materials from the meeting available at the Faculty Center.
