  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

Utilização de objetos de aprendizagem para melhoria da qualidade do ensino de hierarquia de memória / Use of learning objects to improve the quality of memory hierarchy teaching

Tiosso, Fernando 24 March 2015 (has links)
Teaching and learning the memory hierarchy are not simple tasks, because many of the topics covered in theory can demotivate students owing to their complexity. This Master's thesis presents the transformation of the cache memory module of the Amnesia tool into a learning object that aims to facilitate knowledge construction by simulating the structure and functionality of the memory hierarchy of the von Neumann architecture in a more practical and didactic way. This process allowed existing features of the tool to be reworked and new features to be developed. In addition, lesson plans and assessment and usability questionnaires were designed, validated, and applied, and a tutorial describing the operation of the new object was written. The experimental studies examined two aspects: first, whether the learning object actually improved students' learning of cache memory; second, students' opinions regarding the use of the object. Analysis and evaluation of the experimental results demonstrated an improvement in learning when the object was used, and also showed that students' motivation to use other learning objects increased.
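The kind of simulation such a learning object presents can be illustrated with a minimal sketch. This is not the Amnesia tool's code; the class and the address trace below are hypothetical, chosen only to show how hits and misses arise in a direct-mapped cache:

```python
# Minimal direct-mapped cache simulator: each memory block maps to exactly
# one cache line (index = block number mod number of lines).
class DirectMappedCache:
    def __init__(self, num_lines, block_size):
        self.num_lines = num_lines
        self.block_size = block_size
        self.tags = [None] * num_lines  # one tag stored per cache line

    def access(self, address):
        """Return True on a hit, False on a miss (filling the line)."""
        block = address // self.block_size
        index = block % self.num_lines
        tag = block // self.num_lines
        if self.tags[index] == tag:
            return True
        self.tags[index] = tag  # miss: fetch the block into the line
        return False

cache = DirectMappedCache(num_lines=4, block_size=16)
trace = [0, 4, 64, 0, 128, 64]  # addresses 0, 64, 128 all map to index 0
hits = sum(cache.access(a) for a in trace)  # only the access to 4 hits
```

Tracing conflicts like these (addresses 0, 64, and 128 evicting each other from the same line) is exactly the behavior a visual simulator makes tangible for students.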
152

Improving Privacy With Intelligent Cooperative Caching In Vehicular Ad Hoc Networks

Unknown Date (has links)
With the issuance of the Notice of Proposed Rule Making (NPRM) for Vehicle to Vehicle (V2V) communications by the United States National Highway Traffic Safety Administration (NHTSA), the goal of the widespread deployment of vehicular networking has taken a significant step towards becoming a reality. In order for consumers to accept the technology, it is expected that reasonable mechanisms will be in place to protect their privacy. Cooperative Caching has been proposed as an approach that can be used to improve privacy by distributing data items throughout the mobile network as they are requested. With this approach, vehicles first attempt to retrieve data items from the mobile network, alleviating the need to send all requests to a centralized location that may be vulnerable to an attack. However, with this approach, a requesting vehicle may expose itself to many unknown vehicles as part of the cache discovery process. In this work we present a Public Key Infrastructure (PKI) based Cooperative Caching system that utilizes a genetic algorithm to selectively choose members of the mobile network to query for data items, with a focus on improving overall privacy. The privacy improvement is achieved by avoiding those members that present a greater risk of exposing information related to the request and choosing members that have a greater potential of having the needed data item. An Agent Based Model is utilized to baseline the privacy concerns when using a broadcast-based approach to cache discovery. In addition, an epidemiology-inspired mathematical model is presented to illustrate the impact of reducing the number of vehicles queried during cache discovery. Periodic reports from neighboring vehicles are used by the genetic algorithm to identify which neighbors should be queried during cache discovery. In order for the system to be realistic, vehicles must trust the information in these reports.
A PKI-based approach used to evaluate the trustworthiness of each vehicle in the system is also detailed. We have conducted an in-depth performance study of our system that demonstrates a significant reduction in the overall risk of exposure when compared to broadcasting the request to all neighbors. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2017. / FAU Electronic Theses and Dissertations Collection
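The selection idea the abstract describes — favoring neighbors likely to hold the item while avoiding high-exposure ones — can be sketched as a simple scored selection. This is a hedged illustration, not the dissertation's actual genetic algorithm; the report fields, fitness function, and weight are assumptions:

```python
# Hypothetical neighbor selection for cache discovery: rank neighbors by a
# fitness that rewards the estimated chance of holding the data item and
# penalizes the estimated exposure risk, then query only the top k.
def select_neighbors(reports, k, risk_weight=0.5):
    """reports: list of (neighbor_id, p_has_item, exposure_risk) tuples,
    with both probabilities in [0, 1]. Returns the k ids to query."""
    def fitness(report):
        _, p_has_item, exposure_risk = report
        return p_has_item - risk_weight * exposure_risk

    ranked = sorted(reports, key=fitness, reverse=True)
    return [neighbor_id for neighbor_id, _, _ in ranked[:k]]

# Illustrative periodic reports from four neighboring vehicles.
reports = [("v1", 0.9, 0.8), ("v2", 0.6, 0.1),
           ("v3", 0.4, 0.9), ("v4", 0.7, 0.2)]
chosen = select_neighbors(reports, k=2)
```

A genetic algorithm would search over subsets of neighbors with a comparable fitness objective rather than ranking them greedily; the sketch only shows the trade-off being optimized.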
153

Semantic Caching for XML Queries

Chen, Li 29 January 2004 (has links)
With the advent of XML, great challenges arise from the demand for efficiently retrieving information from remote XML sources across the Internet. Semantic caching technology can help to improve the efficiency of XML query processing in the Web environment. Unlike traditional tuple- or page-based caching systems, semantic caching systems exploit the idea of reusing cached query results to answer new queries, based on query containment and rewriting techniques. Fundamental results on the containment of relational queries have been established. In the XML setting, the containment problem remains unexplored for comprehensive XML query languages such as XQuery, and little has been studied with respect to cache management issues such as replacement. Hence, this dissertation addresses two issues fundamental to building an XQuery-based semantic caching system: XQuery containment and rewriting, and an effective replacement strategy. We first define a restricted XQuery fragment for which the containment problem is tackled. For two given queries $Q_1$ and $Q_2$, a preprocessing step including variable minimization and query normalization is taken to transform them into a normal form. Then two tree structures are constructed to represent, respectively, the pattern matching and result construction components of the query semantics. Based on these tree structures, query containment is reduced to tree homomorphism with some specific mapping conditions. Important notations and theorems are also presented to support our XQuery containment and rewriting approaches. For cache replacement, we propose a fine-grained replacement strategy based on detailed user access statistics recorded on the internal XML view structure. As a result, less frequently used XML view fragments are replaced to achieve better utilization of the cache space. Finally, we have implemented a semantic caching system called ACE-XQ to realize the proposed techniques.
Case studies are conducted to confirm the correctness of our XQuery containment and rewriting approaches by comparing the query results produced using ACE-XQ against those produced by the remote XQuery engine. Experimental studies show that query performance is significantly improved by adopting ACE-XQ, and that our partial replacement strategy improves cache hit rate and utilization compared to traditional total replacement.
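The reduction of containment to tree homomorphism can be illustrated on bare pattern trees. This is a simplification of the abstract's idea, not the ACE-XQ algorithm: the mapping conditions here are just label equality and child-to-child mapping, and the node layout is hypothetical:

```python
# A homomorphism maps every pattern node to a tree node with the same label
# such that each pattern child maps into some child of the image. If Q2's
# pattern maps into Q1's, every match of Q1 is also a match of Q2 (Q1 ⊆ Q2).
def homomorphism(pattern, tree):
    """pattern, tree: (label, [children]). True if pattern maps into
    the tree rooted at this node."""
    p_label, p_children = pattern
    t_label, t_children = tree
    if p_label != t_label:
        return False
    return all(
        any(homomorphism(pc, tc) for tc in t_children)
        for pc in p_children
    )

# Q1 asks for books with a title and an author; Q2 only requires a title,
# so Q2's (less constrained) pattern maps into Q1's but not vice versa.
q1 = ("book", [("title", []), ("author", [])])
q2 = ("book", [("title", [])])
```

Real XQuery containment additionally has to respect the result-construction tree and the variable bindings, which is why the dissertation normalizes queries and builds two tree structures rather than one.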
154

Self Maintenance of Materialized XQuery Views via Query Containment and Re-Writing

Nilekar, Shirish K. 24 April 2006 (has links)
In recent years XML, the eXtensible Markup Language, has become the de facto standard for publishing and exchanging information on the web and in enterprise data integration systems. Materialized views are often used in information integration systems to present a unified schema for efficient querying of distributed and possibly heterogeneous data sources. Along similar lines, ACE-XQ, an XQuery-based semantic caching system, shows the significant performance gains achieved by caching query results (as materialized views) and using these materialized views along with query containment techniques to answer future queries over distributed XML data sources. To keep the data in these materialized views of ACE-XQ up to date, the views must be maintained, i.e., whenever the base data changes, the corresponding cached data in the materialized view must also be updated. This thesis builds on the query containment ideas of ACE-XQ and proposes an efficient approach for the self-maintenance of materialized views. Our experimental results illustrate the significant performance improvement achieved by this strategy over view re-computation in a variety of situations.
155

Caracterização energética da codificação de vídeo de alta eficiência (HEVC) em processador de propósito geral / Energy characterization of high efficiency video coding (HEVC) in general purpose processor

Monteiro, Eduarda Rodrigues January 2017 (has links)
The popularization of applications that handle high-resolution digital video brings several challenges in developing new, efficient techniques to maintain video compression efficiency. To meet this demand, the HEVC standard was proposed with the goal of doubling compression rates compared with its predecessors. To achieve this goal, however, HEVC imposes a high computational cost and, consequently, increased energy consumption. This scenario becomes even more concerning for battery-powered mobile devices, which face computational constraints when processing multimedia applications. Most related work concentrates on reducing and managing the computational effort of the encoding process; the literature offers little information on the energy consumed by video encoding and, in particular, on the energy impact of the cache hierarchy in this context. This thesis presents a methodology for the energy characterization of the HEVC video encoder on a general-purpose processor. The main goal of the proposed methodology is to provide quantitative data on HEVC energy consumption. The methodology comprises two modules: one focused on the HEVC encoding process itself and the other on the standard's behavior with respect to the cache memory. A key advantage of the second module is that it remains independent of the application and of the processor architecture. Several analyses were performed to characterize the energy consumption of the HEVC encoder on a general-purpose processor, considering different video sequences, resolutions, and encoder parameters. In addition, an extensive, detailed analysis of different possible cache configurations was carried out to evaluate their energy impact on encoding. The results of the proposed characterization show that managing the video coding parameters jointly with the cache specifications has high potential for reducing the energy consumption of video coding while preserving the visual quality of the encoded sequences.
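A characterization of this kind typically decomposes cache energy into dynamic energy per hit/miss and static (leakage) power over the encoding time. The first-order model below is a hypothetical sketch of that decomposition, not the thesis's methodology, and all numbers are illustrative rather than measured HEVC data:

```python
# First-order cache energy model: dynamic energy scales with hits and
# misses, static energy scales with encoding time regardless of activity.
def cache_energy(accesses, miss_rate, e_hit, e_miss, p_static, time_s):
    """e_hit/e_miss in joules per access, p_static in watts, time in
    seconds. Returns total cache energy in joules."""
    misses = accesses * miss_rate
    hits = accesses - misses
    dynamic = hits * e_hit + misses * e_miss
    static = p_static * time_s
    return dynamic + static

# Illustrative-only parameters: 1M accesses, 5% miss rate, 1 nJ per hit,
# 20 nJ per miss (next-level fetch), 100 mW leakage, 2 s of encoding.
total = cache_energy(accesses=1_000_000, miss_rate=0.05,
                     e_hit=1e-9, e_miss=20e-9, p_static=0.1, time_s=2.0)
```

Even this toy model shows why cache configuration matters for an encoder: here static power dominates, which is the kind of trade-off a per-configuration characterization quantifies.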
156

Cache design and timing analysis for preemptive multi-tasking real-time uniprocessor systems

Tan, Yudong 18 April 2005 (has links)
In this thesis, we propose an approach to estimate the Worst Case Response Time (WCRT) of each task in a preemptive multi-tasking single-processor real-time system utilizing an L1 cache. The approach combines inter-task cache eviction analysis and intra-task cache access analysis to estimate the Cache Related Preemption Delay (CRPD). The CRPD caused by preempting tasks is then incorporated into the WCRT analysis. We also propose a prioritized cache that reduces CRPD by exploiting a cache partitioning technique. Our WCRT analysis approach is then applied to analyze the behavior of the prioritized cache. Four sets of applications, with up to six tasks running concurrently, are used to test our WCRT analysis approach and the prioritized cache. The experimental results show that our WCRT analysis approach can tighten the WCRT estimate by up to 32% (1.4X) over the prior state of the art. By using a prioritized cache, we can reduce the WCRT estimate of tasks by up to 26% compared to a conventional set-associative cache.
157

Competitive cache replacement strategies for a shared cache

Katti, Anil Kumar 08 July 2011 (has links)
We consider cache replacement algorithms at a shared cache in a multicore system which receives an arbitrary interleaving of requests from processes that have full knowledge of their individual request sequences. We establish tight bounds on the competitive ratio of deterministic and randomized cache replacement strategies when processes share memory blocks. Our main result for this case is a deterministic algorithm called GLOBAL-MAXIMA, which is optimal up to a constant factor when processes share memory blocks. Our framework is a generalization of the application-controlled caching framework, in which processes access disjoint sets of memory blocks. We also present a deterministic algorithm called RR-PROC-MARK which exactly matches the lower bound on the competitive ratio of deterministic cache replacement algorithms when processes access disjoint sets of memory blocks. We extend our results to multiple levels of caches and prove that an exclusive cache is better than both inclusive and non-inclusive caches; this validates the experimental findings in the literature. Our results can be applied to shared caches in multicore systems in which processes work together on multithreaded computations such as Gaussian elimination, the fast Fourier transform, matrix multiplication, etc. In these computations, processes have full knowledge of their individual request sequences and can share memory blocks.
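The notion of a competitive ratio compares an online replacement policy against the offline optimal on the same request sequence. The sketch below illustrates that comparison with plain LRU versus Belady's offline rule on a single cache; it is background for the abstract's setting, not the GLOBAL-MAXIMA or RR-PROC-MARK algorithms:

```python
# Online LRU: evict the least recently used block on a miss.
def lru_misses(seq, k):
    cache, misses = [], 0          # list ordered LRU -> MRU
    for x in seq:
        if x in cache:
            cache.remove(x)
        else:
            misses += 1
            if len(cache) == k:
                cache.pop(0)       # evict least recently used
        cache.append(x)            # x becomes most recently used
    return misses

# Offline optimal (Belady): evict the block reused farthest in the future.
def belady_misses(seq, k):
    cache, misses = set(), 0
    for i, x in enumerate(seq):
        if x in cache:
            continue
        misses += 1
        if len(cache) == k:
            def next_use(y):
                rest = seq[i + 1:]
                return rest.index(y) if y in rest else float("inf")
            cache.remove(max(cache, key=next_use))
        cache.add(x)
    return misses

# Cyclic access to 3 blocks with capacity 2: LRU's worst case.
seq = [1, 2, 3, 1, 2, 3, 1, 2, 3]
```

On this sequence LRU misses every request while the offline optimal does not; the competitive-ratio results in the thesis bound this kind of gap when the requests come from multiple knowledgeable processes sharing the cache.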
158

Active management of Cache resources

Ramaswamy, Subramanian 08 July 2008 (has links)
This dissertation addresses two sets of challenges facing processor design as the industry enters the deep sub-micron region of semiconductor design. The first set of challenges relates to the memory bottleneck. As the focus shifts from scaling processor frequency to scaling the number of cores, performance growth demands increasing die area. Scaling the number of cores also places a concurrent area demand in the form of larger caches. While on-chip caches occupy 50-60% of die area and consume 20-30% of the energy expended on-chip, their performance and energy efficiencies are less than 15% and 1%, respectively, for a range of benchmarks. The second set of challenges is posed by transistor leakage and process variation (inter-die and intra-die) at future technology nodes. Leakage power is anticipated to increase exponentially and to sharply lower defect-free yield with successive technology generations. For performance scaling to continue, cache efficiencies have to improve significantly. This thesis proposes and evaluates a broad family of such improvements. The dissertation first contributes a model for cache efficiencies and finds them to be extremely low: performance efficiencies below 15% and energy efficiencies on the order of 1%. Studying the sources of inefficiency leads to a framework for efficiency improvement based on two interrelated strategies. The approach for improving energy efficiency primarily relies on sizing the cache to match the application's memory footprint during a program phase while powering down all remaining cache sets. Importantly, the sized cache is fully functional, with no references to inactive sets. Improving performance efficiency primarily relies on cache shaping, i.e., changing the placement function and thereby the manner in which memory shares the cache. Sizing and shaping are applied at different phases of the design cycle: i) post-manufacturing and offline, ii) at compile time, and iii) at run time.
This thesis proposes and explores techniques at each phase, collectively realizing a repertoire of techniques for future memory system designers. The techniques use a combination of hardware and software mechanisms and are demonstrated to provide substantive improvements with modest overheads.
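The sizing strategy above — keeping only enough sets powered to cover a phase's footprint — can be sketched as a simple capacity calculation. This is a hedged illustration of the idea, not the dissertation's mechanism, and the cache geometry below is an assumption:

```python
# Pick the smallest power-of-two number of active sets whose capacity
# (sets * ways * line_size) covers the phase's memory footprint; the
# remaining sets could then be powered down to save leakage energy.
def active_sets(footprint_bytes, total_sets, ways, line_size):
    """Return the number of sets to keep powered for this program phase."""
    sets = 1
    while sets < total_sets and sets * ways * line_size < footprint_bytes:
        sets *= 2  # power of two keeps the set-index function simple
    return sets

# Illustrative geometry: 64 sets, 4 ways, 64 B lines (16 KiB total).
# A 6 KiB phase footprint needs only half the sets powered.
sets_needed = active_sets(footprint_bytes=6 * 1024,
                          total_sets=64, ways=4, line_size=64)
```

Keeping the set count a power of two is one way to make the resized cache "fully functional with no references to inactive sets": the index function simply uses fewer bits, so every lookup lands in a powered set.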
159

Improving instruction fetch rate with code pattern cache for superscalar architecture

Beg, Azam Muhammad, January 2005 (has links)
Thesis (Ph.D.) -- Mississippi State University. Department of Electrical and Computer Engineering. / Title from title screen. Includes bibliographical references.
160

Técnicas de redução de potência estática em memórias CMOS SRAM e aplicação da associação de MOSFETs tipo TST em nano-CMOS / Static energy reduction techniques for CMOS SRAM memories and TST MOSFET association application for nano-CMOS

Conrad Junior, Eduardo January 2009 (has links)
Nowadays, the growing demand for portability and performance results in efforts to maximize the battery life of manufactured equipment, i.e., the conflicting goals of low power consumption and high performance. In this context, portable equipment typically employs SoCs (Systems on Chip), which lowers production and integration costs. SoCs are complete systems, integrated on a single silicon die, that perform a given function; they normally include SRAM memories as system components, used as high-performance, low-latency memories and/or as caches. The great design challenge for SRAM memories is optimizing the trade-off between performance and power consumption. By construction, these circuits exhibit high dynamic and static power consumption, the former directly related to the operating frequency. One focus of this dissertation is to explore solutions for reducing both dynamic and static energy consumption, in particular the static consumption of memory cells in standby, while pursuing performance, stability, and low energy use. Among the techniques developed for analog circuit design in nanometer technologies, TSTs (T-Shaped Transistors) emerge as devices with promising characteristics for low-power analog design. TSTs/TATs (Trapezoidal Associations of Transistors) are self-cascode structures that can be a good choice, offering reduced leakage, reduced area, greater layout regularity, and improved matching between transistors, a very important property for analog circuits. The second focus of this text is the study and analysis of electrical measurements of TSTs, carried out to verify the characteristics of these devices. An analysis of the possibilities of using TSTs in analog design for nanometer technologies is also presented.
