301

Geocaching v chráněné krajinné oblasti Beskydy

Tabáčková, Alena January 2016
This thesis describes the state of the geocaching phenomenon in the Beskydy Protected Landscape Area, focusing on a detailed description of the cache network in the studied area. The theoretical part presents the terminology and typology of all existing cache types, together with a summary of the historical evolution of geocaching in the Czech Republic and in other parts of the world, and defines the requirements for hiding geocaches in protected landscape areas and their possible impacts on the landscape. This theoretical part supplies the background needed for the experimental part, which analyzes the collected data. Important outputs and recommendations for the Beskydy Protected Landscape Area Administration are presented in the final part of the thesis.
302

Increasing embedded software radiation reliability through cache memories

Santini, Thiago Caberlon January 2015
Cache memories are traditionally disabled in space-level and safety-critical applications since it is believed that the sensitive area they introduce would compromise system reliability. As technology has evolved, the speed gap between logic and main memory has increased in such a way that disabling caches slows code execution much more than in the past. As a result, the processor is exposed for a much longer time in order to compute the same workload. In this work we demonstrate that, on modern embedded processors, enabling caches may bring benefits to critical systems: the larger exposed area may be compensated by the shorter exposure time, leading to overall improved reliability. We propose an intuitive metric and a mathematical model to evaluate system reliability in spatial (i.e., radiation-sensitive area) and temporal (i.e., performance) terms, and prove that minimizing the radiation-sensitive area does not necessarily maximize application reliability. The proposed metric and model are experimentally validated through an extensive radiation test campaign using a 28nm off-the-shelf ARM-based System-on-Chip as a case study. The experimental results demonstrate that, while executing the considered application at military aircraft altitude, the probability of executing a two-year mission workload without failures is increased by 5.85% if the L1 caches are enabled (thus increasing the radiation-sensitive area), when compared to no cache level being enabled. However, if both the L1 and L2 caches are enabled, the probability is decreased by 31.59%.
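The spatial-temporal trade-off the abstract describes can be made concrete with a toy failure model. The sketch below assumes Poisson-distributed radiation-induced failures whose rate scales with the exposed sensitive area times the particle flux; the areas, flux, and speedup figures are hypothetical placeholders, not the thesis's measurements.

```python
import math

def mission_success_probability(cross_section_cm2, flux, exec_time_s):
    """P(no failure) under a Poisson model: failures arrive at a rate
    proportional to the radiation-sensitive area (cross section) times
    the particle flux, for as long as the processor is busy."""
    failure_rate = cross_section_cm2 * flux  # expected failures per second
    return math.exp(-failure_rate * exec_time_s)

# Illustrative (made-up) numbers: enabling L1 triples the sensitive
# area but cuts execution time by 5x, so overall reliability improves.
FLUX = 1e-3  # particles / (cm^2 * s), hypothetical altitude flux
MISSION_S = 2 * 365 * 86400  # two-year mission workload, no cache
no_cache = mission_success_probability(1e-7, FLUX, MISSION_S)
l1_cache = mission_success_probability(3e-7, FLUX, MISSION_S / 5)
print(f"no cache: {no_cache:.4f}, L1 enabled: {l1_cache:.4f}")
```

With these numbers the larger area loses to the shorter exposure time, which is exactly the effect the metric formalizes; a bigger, slower-to-fill L2 can tip the balance the other way.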
303

Carbon and Nitrogen Cycling in Giant Cane (Arundinaria gigantea (Walt.) Muhl.) Riparian Ecosystems

Nelson, Amanda 01 May 2015
Large stands of Arundinaria gigantea (Walt.) Muhl., called canebrakes, were vital to wildlife and lowland ecosystem functions and historically covered millions of acres in the southeastern United States. Since European settlement, human disturbance (i.e., clearing for agriculture and fire suppression) has caused giant canebrakes to become critically endangered ecosystems. Increasing evidence suggests the loss of canebrakes has directly impacted riparian ecosystems, resulting in increased soil erosion, poorer water quality, and reduced flood control. Cane's ecological importance has led to an increased interest in canebrake restoration in riparian zones. To examine the role that cane plays in nutrient cycling and to identify targeted restoration sites, a four-phase research strategy was designed to determine the physical and chemical properties of existing riparian stands of native giant cane and their associated soils. Phase one was a GIS analysis to determine which geographical features may be used in selecting sites within a landscape suitable for canebrake restoration. First, common physical site characteristics for 140 existing southern Illinois canebrakes were determined. Soil taxonomy and pH were used to represent soil characteristics, and percent slope was used as a topographic metric. These factors, combined with digital elevation models and land cover in GIS, were used to identify the potential suitability of sites within the watershed for canebrake plantings and general riparian restoration. The following characteristics were found to be associated with giant cane success: slopes of 3 percent or less, fine to coarse-silty textures, pH of 5.3-6.7, effective cation exchange capacity of less than 30 units, available water-holding capacity greater than 0.12, bulk density of 1.37-1.65 g cm-3, and clay content of 11-55 percent. Eighty percent of existing giant cane sites fell within these slope and soil characteristics. The total area of potential riparian canebrake landscapes based on these parameters is 13,970 hectares (35,600 acres) within the Cache River watershed. The remaining three phases examined the role that cane plays in nutrient cycling. Phase two determined the pools and cycling of nitrogen and carbon in canebrakes and compared them to those of nearby agricultural and forested riparian areas. Phase three quantified the N2O and CO2 fluxes from canebrakes and adjacent forested areas. Phase four quantified the nutrient content of leaf litter and live leaves from existing canebrakes to estimate the nutrient use efficiency of cane; further, a decomposition study was conducted to calculate the decomposition rate of cane leaves and to explore the litter quality attributes of giant cane. The primary purpose of phase two was to compare the effects of perennial riparian vegetation (giant cane and forest) and annual crops on soil quality, nitrogen cycling, and physical properties, in order to determine whether any of them have a significant influence on giant cane distribution, while focusing on nitrogen dynamics to help determine why giant cane is a successful riparian buffer species. Five study sites in the Cache River watershed that had cane, agricultural fields (corn-soybean rotation), and forested areas adjacent to one another were selected. Data were collected on soil texture, carbon/nitrogen ratios, bulk density, nitrogen content (as ammonia and nitrate), and net nitrogen mineralization rates.
The crop sites had significantly lower soil C:N ratios than both forest and cane (9.8:1 vs. 10.9:1 and 10.7:1, respectively), though all sites had ratios of less than 25:1, indicating a tendency toward nitrogen mineralization. Forest soils had significantly higher rates of net mineralization than cane (19.0 μg m-2 day-1 vs. 6.6 μg m-2 day-1), with crop (8.0 μg m-2 day-1) not significantly different from either. Cane had higher levels of soil carbon and nitrogen than forest and crop soils. Cane can be successful in wetter areas than previously thought, implying that the range of conditions that will support cane is broader than previously recognized. Overall, there were few identifiable soil controls on giant cane distribution, or soil factors that differentiate long-standing canebrakes from nearby crop and forest land.
For phase three, nitrous oxide and carbon dioxide emissions were measured monthly for one year in riparian canebrakes and forests in southern Illinois to determine the rates of greenhouse gas (GHG) fluxes in bottomland riparian areas. Carbon dioxide emissions correlated strongly with soil temperature (p < 0.001, r2 = 0.54), but not with soil water content (p > 0.05), and were greater during the warmer months. Nitrous oxide emissions correlated with soil water content (p = 0.470, r2 = 0.11) but showed no relation to soil temperature (p > 0.05) and no difference across time. Vegetation type did not appear to influence GHG fluxes. Riparian CO2 and N2O emission rates were higher than documented cropland emissions, indicating that riparian restoration projects intended to reduce NO3 delivery to streams may affect N2O and CO2 emissions, resulting in an ecosystem tradeoff between water quality and air quality.
In phase four, leaf deposition, N resorption efficiency and proficiency, and decomposition rates were analyzed in riparian stands of Arundinaria gigantea in southern Illinois for the first time. Leaf litter was collected from five established canebrakes monthly over one year, and a decomposition study was conducted over 72 weeks. Live leaves, freshly senesced leaves, and decomposed leaves were analyzed for carbon and nitrogen content. Leaf litterfall biomass peaked in November at twice the monthly average for all but one site, indicating a resemblance to deciduous leaf-fall patterns. Nitrogen and carbon levels decreased 48% and 30%, respectively, between live leaves and leaves decomposed for 72 weeks. High soil moisture appeared to slow decomposition rates, perhaps due to the creation of anaerobic conditions. Cane leaves have low resorption proficiency and nutrient use proficiency, suggesting that these riparian canebrakes are not nitrogen limited. These results will help improve our understanding of the role that giant cane plays in a riparian ecosystem and help focus cane restoration efforts in southern Illinois.
304

Bounding the Worst-Case Response Times of Hard-Real-Time Tasks under the Priority Ceiling Protocol in Cache-Based Architectures

Poluri, Kaushik 01 August 2013
Schedulability analysis of hard-real-time systems requires a priori knowledge of the worst-case execution times (WCET) of all tasks. Static timing analysis is a safe technique for calculating WCET that attempts to model program complexity, architectural complexity, and the complexity introduced by interference from other tasks. Modern architectural features such as caches make static timing analysis of a single task challenging, due to the unpredictability introduced by their reliance on the history of memory accesses, and make the analysis of a set of tasks even more challenging, due to cache-related interference among tasks. Researchers have proposed several static timing analysis techniques that explicitly consider cache-eviction delays for independent hard-real-time tasks executing on cache-based architectures. However, there is little research in this area for resource-sharing tasks. Recently, an analysis technique was proposed for systems using the Priority Inheritance Protocol (PIP) to manage resource arbitration among tasks. The Priority Ceiling Protocol (PCP) is a resource-arbitration protocol that offers distinct advantages over the PIP, including deadlock avoidance. However, to the best of our knowledge, there is currently no technique to bound the WCET of resource-sharing tasks under the PCP with explicit consideration of cache-eviction delays. This thesis presents a technique to bound the WCETs and hence the worst-case response times (WCRTs) of resource-sharing hard-real-time tasks executing on cache-based uniprocessor systems, focusing specifically on data cache analysis.
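For context, analyses of this kind typically extend the classical response-time recurrence with a blocking term (bounded under the PCP to one lower-priority critical section) and a cache-related preemption delay (CRPD) charged per preemption. The sketch below is a minimal illustration of that recurrence under assumed task parameters, not the technique developed in the thesis.

```python
import math

# Each task: (wcet, period, pcp_block); tasks sorted highest priority first.
# pcp_block bounds the blocking from lower-priority critical sections,
# which under the Priority Ceiling Protocol occurs at most once per job.
def wcrt(i, tasks, crpd):
    """Iterate R = C_i + B_i + sum_j ceil(R / T_j) * (C_j + CRPD_j)
    over all higher-priority tasks j until a fixed point is reached."""
    wcet, period, block = tasks[i]
    r = wcet + block
    while True:
        interference = sum(
            math.ceil(r / t_j) * (c_j + crpd[j])
            for j, (c_j, t_j, _) in enumerate(tasks[:i])
        )
        r_next = wcet + block + interference
        if r_next == r:
            return r          # converged: worst-case response time
        if r_next > period:
            return None       # response exceeds period: unschedulable
        r = r_next

# Example: two tasks, 1 time unit of CRPD charged per preemption.
tasks = [(2, 10, 0), (4, 20, 3)]
print(wcrt(1, tasks, crpd=[1, 0]))  # -> 10
```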
305

Performance evaluation of Cache-Based Multi-Core Architectures with Networks-on-Chip

Rajkumar, Robin Kingsley 01 December 2012
Multi-core architectures are the future of high-performance computing and are omnipresent these days: what was a vision some twenty years ago is now a reality, with most personal computers and laptops running on multi-cores. However, as core counts continue to scale, serious throughput and performance issues will arise in relation to the network topologies used to connect the cores. Among the network topologies under consideration in modern multi-core systems, the Mesh topology is widely used. In terms of performance, the Point-to-Point topology outperforms alternatives such as the Crossbar, Mesh, and Torus, but it incurs additional expense, since links are needed to connect each core to every other core in the network. Its implementation cost is the reason it is not preferred in industry for general-use systems; for research purposes, however, it serves as the best network topology alternative to the Mesh for higher speed. The characteristics of the tasks executing on the cores also have a significant impact on topology performance. So, with multi-cores scaling from 10 to 1000 cores per chip and beyond, selection of the right network topology is important. Another interesting factor is the effect of the cache on these multi-core systems with respect to each of these topologies: cache coherency is, and will remain, a major cause of throughput decrease as cores scale. In our work, we use the Modified-Exclusive-Shared-Invalid (MESI) cache coherency protocol for all the network topologies considered. In this thesis, we investigate the effect on each network topology of varying cache parameters, namely the sizes of the L1 instruction cache, L1 data cache, and L2 cache and their respective associativities; various combinations of these four parameters were considered in our experiments. We use the gem5 computer architecture simulator with 4-core models and, for benchmark purposes, the SPLASH-2 suite of high-performance-computing benchmarks, assigning a benchmark to each core. We also observe the effects of running benchmarks with similar characteristics on all cores versus a set of different benchmarks, while keeping all other parameters constant. Through our results, we attempt to give researchers and industry at large a better view of the advantages and disadvantages of, and the relationships among, multi-cores, the cache, and network topologies for multi-core systems.
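A parameter sweep of this kind can be previewed with a back-of-the-envelope average memory access time (AMAT) model before committing to full gem5 runs. The hit rates and latencies below are assumed placeholders standing in for the effect of different L1/L2 sizes and associativities, not results from the thesis.

```python
from itertools import product

def amat(l1_hit, l2_hit, l1_lat=1, l2_lat=10, mem_lat=100):
    """Average memory access time for a two-level hierarchy:
    AMAT = L1 latency + L1 miss rate * (L2 latency + L2 miss rate * memory)."""
    return l1_lat + (1 - l1_hit) * (l2_lat + (1 - l2_hit) * mem_lat)

# Sweep hypothetical hit rates the way one would sweep cache sizes and
# associativities in the simulator, keeping the topology fixed.
for l1, l2 in product([0.90, 0.95, 0.99], [0.5, 0.8]):
    print(f"L1 hit {l1:.2f}, L2 hit {l2:.2f} -> AMAT {amat(l1, l2):.2f} cycles")
```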
306

Aperiodic Job Handling in Cache-Based Real-Time Systems

Motakpalli, Sankalpanand 01 December 2017
Real-time systems require a priori temporal guarantees. While most normal operation in such a system is modeled using time-driven, hard-deadline sporadic tasks, event-driven behavior is modeled using aperiodic jobs with soft or no deadlines. To provide good Quality-of-Service for aperiodic jobs in the presence of sporadic tasks, aperiodic servers were introduced: an aperiodic server acts as a sporadic task and periodically reserves a quota to serve aperiodic jobs. The use of aperiodic servers in systems with caches is unsafe, because aperiodic servers do not take into account the indirect cache-related preemption delays that the execution of aperiodic jobs might impose on lower-priority sporadic tasks, thus jeopardizing their safety. To solve this problem, we propose an enhancement to the aperiodic server that we call a Cache Delay Server. Here, each lower-priority sporadic task is assigned a delay quota to accommodate the cache-related preemption delay imposed by the execution of aperiodic jobs, and aperiodic jobs are allowed to execute at their assigned server priority only when all active lower-priority sporadic tasks have a sufficient delay quota to accommodate them. Simulation results demonstrate that a Cache Delay Server ensures the safety of sporadic tasks while providing acceptable Quality-of-Service for aperiodic jobs. We propose an Integer Linear Program based approach to calculate delay quotas for sporadic tasks within a task set where Cache Delay Servers have been pre-assigned. We then propose algorithms to determine Cache Delay Server characteristics for a given sporadic task set. Finally, we extend the Cache Delay Server concept to multi-core architectures and propose approaches to schedule aperiodic jobs on appropriate Cache Delay Servers. Simulation results demonstrate the effectiveness of all our proposed algorithms in improving aperiodic job response times while maintaining the safety of sporadic task execution.
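The admission rule implied by this description can be sketched in a few lines. The names and structure below are hypothetical illustrations; the thesis's actual quota accounting (and its ILP-based quota assignment) is more involved.

```python
from dataclasses import dataclass

@dataclass
class SporadicTask:
    priority: int        # smaller number = higher priority
    delay_quota: float   # remaining budget for aperiodic-induced CRPD

def can_release_aperiodic(server_priority, crpd_cost, active_tasks):
    """Release an aperiodic job at the server's priority only if every
    active lower-priority sporadic task can still absorb the job's
    cache-related preemption delay out of its delay quota."""
    lower = [t for t in active_tasks if t.priority > server_priority]
    return all(t.delay_quota >= crpd_cost for t in lower)

def charge_quota(server_priority, crpd_cost, active_tasks):
    """Deduct the delay the aperiodic job actually imposes."""
    for t in active_tasks:
        if t.priority > server_priority:
            t.delay_quota -= crpd_cost

tasks = [SporadicTask(1, 0.0), SporadicTask(5, 2.5), SporadicTask(7, 0.5)]
if can_release_aperiodic(server_priority=3, crpd_cost=1.0, active_tasks=tasks):
    charge_quota(3, 1.0, tasks)   # not reached: task 7 lacks quota, job waits
print([t.delay_quota for t in tasks])
```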
307

Real-time operating system support for multicore applications

Gracioli, Giovani January 2014
Doctoral thesis - Universidade Federal de Santa Catarina, Centro Tecnológico, Graduate Program in Automation and Systems Engineering, Florianópolis, 2014. / Modern multicore platforms feature multiple levels of cache memory placed between the processor and main memory to hide the latency of ordinary memory systems. The primary goal of this cache hierarchy is to improve average execution time, at the cost of predictability. The uncontrolled use of the cache hierarchy by real-time tasks may impact the estimation of their worst-case execution times (WCET), especially when real-time tasks access a shared cache level, causing contention for shared cache lines and increasing application execution time. This contention in the shared cache may lead to deadline losses, which is intolerable, particularly for hard real-time (HRT) systems. Shared cache partitioning is a well-known technique used in multicore real-time systems to isolate task workloads and to improve system predictability. Presently, the state-of-the-art studies that evaluate shared cache partitioning on multicore processors lack two key issues. First, the cache partitioning mechanism is typically implemented either in a simulated environment or in a general-purpose OS (GPOS), so the impact of kernel activities, such as interrupt handling and context switching, on the task partitions tends to be overlooked. Second, the evaluation is typically restricted to either a global or a partitioned scheduler, thereby failing to compare the performance of cache partitioning when tasks are scheduled by different schedulers. Furthermore, recent works have confirmed that OS implementation aspects, such as the choice of scheduling data structures and interrupt handling mechanisms, impact real-time schedulability as much as scheduling-theoretic aspects. However, these studies also used real-time patches applied to GPOSes, which affects the run-time overhead observed in these works and consequently the schedulability of real-time tasks.
Additionally, current multicore scheduling algorithms do not consider scenarios where real-time tasks access the same cache lines due to true or false sharing, which also impacts the WCET. This thesis addresses the aforementioned problems with cache partitioning techniques and multicore real-time scheduling algorithms as follows. First, real-time multicore support is designed and implemented on top of an embedded operating system built from scratch. This support consists of several multicore real-time scheduling algorithms, such as global and partitioned EDF, and a cache partitioning mechanism based on page coloring. Second, a comparison is presented, in terms of schedulability ratio, between the run-time overhead of the implemented RTOS and that of a GPOS patched with real-time extensions. In some cases, global EDF under the overhead of the RTOS is superior to partitioned EDF under the overhead of the patched GPOS, which clearly shows how different OSs impact hard real-time schedulers. Third, an evaluation of the cache partitioning impact on partitioned, clustered, and global real-time schedulers is performed. The results indicate that a lightweight RTOS does not compromise real-time guarantees, and that shared cache partitioning behaves differently depending on the scheduler and the tasks' working set sizes. Fourth, a task partitioning algorithm is proposed that assigns tasks to cores respecting their usage of cache partitions. The results show that by simply assigning tasks that share cache partitions to the same processor, it is possible to reduce contention for shared cache lines and to provide HRT guarantees. Finally, a two-phase multicore scheduler that provides HRT and soft real-time (SRT) guarantees is proposed. It is shown that by using information from hardware performance counters at run time, the RTOS can detect when best-effort tasks interfere with real-time tasks in the shared cache, and can then prevent best-effort tasks from doing so. The results also show that the assignment of exclusive partitions to HRT tasks, together with the two-phase multicore scheduler, provides HRT and SRT guarantees even when best-effort tasks share partitions with real-time tasks.
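Page coloring, the partitioning mechanism named above, exploits the physical-address bits that are shared between the page number and the set index of a physically indexed cache: pages of different colors can never evict each other. The arithmetic below is a minimal sketch under an assumed cache geometry, not the thesis's implementation.

```python
# Hypothetical shared-cache geometry, chosen only for illustration.
PAGE_SIZE = 4096
CACHE_SIZE = 512 * 1024   # shared L2 size in bytes
WAYS = 8                  # associativity
LINE = 64                 # cache line size in bytes

sets = CACHE_SIZE // (WAYS * LINE)        # number of cache sets (1024 here)
num_colors = (sets * LINE) // PAGE_SIZE   # bytes of one way / page size = 16

def page_color(phys_addr):
    """Color = the low page-number bits that also index the cache set.
    Allocating a task only pages of its assigned colors confines it to
    the corresponding slice of the shared cache."""
    return (phys_addr // PAGE_SIZE) % num_colors

print(num_colors, page_color(0x12345000))  # -> 16 colors; this page is color 5
```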
308

Simulador para análise de desempenho de políticas de gerenciamento de servidores web cache

Paes, Edson Roberto Souza January 2003
Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Graduate Program in Computer Science. / Web cache servers are used as an alternative in companies where a large share of the users request the same information within a given period of time. Normally, a local copy of each requested object is kept on this server, so that subsequent requests for the same object do not need to fetch it again. As requests increase, so does the number of new objects to be stored, and as disk space runs low, the server must implement file replacement policies that keep the cache organized and maintained. Depending on the policy chosen, a larger or smaller share of files will be found in the cache by future requests. In this work, a web cache simulator is developed that, from logs generated by a specific company, determines the best file replacement policy to deploy at that site. The prototype implements the three policies most commonly used on such servers: SIZE, LRU, and LFU.
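A trace-driven simulator of this kind reduces to replaying a request log against a bounded cache and varying only the victim-selection rule. The sketch below implements the three named policies over a toy trace; the trace and capacity are illustrative, not the company logs used in the dissertation.

```python
from collections import OrderedDict

def simulate(requests, capacity, policy):
    """Replay (url, size) requests against a cache of `capacity` bytes and
    return the hit ratio. The victim choice encodes the policy: LRU evicts
    the least recently used object, LFU the least frequently used, SIZE
    the largest."""
    cache, freq, used, hits = OrderedDict(), {}, 0, 0
    for url, size in requests:
        freq[url] = freq.get(url, 0) + 1
        if url in cache:
            hits += 1
            cache.move_to_end(url)           # refresh recency for LRU
            continue
        while used + size > capacity and cache:
            if policy == "LRU":
                victim = next(iter(cache))   # oldest entry
            elif policy == "LFU":
                victim = min(cache, key=lambda u: freq[u])
            else:                            # SIZE
                victim = max(cache, key=lambda u: cache[u])
            used -= cache.pop(victim)
        cache[url] = size
        used += size
    return hits / len(requests)

trace = [("a", 10), ("b", 30), ("a", 10), ("c", 70), ("a", 10)]
for p in ("LRU", "LFU", "SIZE"):
    print(p, simulate(trace, capacity=100, policy=p))
```

Feeding each policy the same real log and comparing hit ratios is exactly the experiment the dissertation describes.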
310

Proposta para alocação de canais e para comunicação cooperativa em redes Ad Hoc

Neves, Thiago Fernandes 22 January 2014
Master's dissertation - Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciências da Computação, 2014. / The popularization of wireless technology, allied with applications that demand continuous connectivity and high transmission rates, has boosted the development of energy-efficient Medium Access Control (MAC) protocols. Mechanisms that improve network performance by exploiting the availability of multiple communication channels have been explored in the literature. However, developing energy-efficient protocols that perform channel allocation and communication scheduling while improving network performance is a challenging task. In this context, the first part of this dissertation proposes a channel-allocation and communication-scheduling protocol for wireless networks, named EEMC-MAC, that reduces energy consumption and communication time.
The second part of this dissertation focuses on techniques to improve connectivity in ad hoc networks. In this context, Cooperative Communication (CC) is employed to exploit spatial diversity in the physical layer, allowing multiple nodes to cooperatively relay signals to a receiver, which can combine the received signals to recover the original message. Since CC can reduce a node's transmission power and extend its transmission range, the technique has been combined with topology control protocols in wireless ad hoc networks. Early works on topology control in cooperative ad hoc networks aimed to increase network connectivity while minimizing energy consumption at each node; later works focused on the efficiency of the routes created in the final topology. Nevertheless, to the best of our knowledge, no work so far has explored CC to increase connectivity to a sink node in wireless networks. As a second contribution of this work, a new technique named CoopSink is proposed, which uses CC and topology control in ad hoc networks to increase connectivity to a sink node while ensuring efficient routes to the sink.
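The range-extension argument behind CC can be illustrated with a one-line link budget: under maximal-ratio combining, the post-combining SNR is the sum of the per-branch SNRs, so a relay's copy adds directly to the direct link. The path-loss model and numbers below are assumed for illustration only, not parameters from the dissertation.

```python
# Log-distance path loss with exponent 3; all figures are hypothetical.
def snr_linear(tx_power_mw, distance_m, path_loss_exp=3.0, noise_mw=1e-9):
    """Linear received SNR = received power / noise floor."""
    return (tx_power_mw * distance_m ** -path_loss_exp) / noise_mw

direct = snr_linear(100, 80)              # source -> sink link alone
combined = direct + snr_linear(100, 50)   # plus a relay's copy (MRC sum)
print(f"direct SNR: {direct:.0f}, with cooperative relay: {combined:.0f}")
```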
