About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Core level thermal estimation techniques for early design space exploration

Gandhi, Darshan Dhimantkumar, 18 September 2014
The primary objective of this thesis is to develop a methodology for fast yet accurate temperature estimation during design space exploration. Power and temperature of modern systems have become important metrics in addition to performance. Static and dynamic power dissipation leads to an increase in temperature, which creates cooling and packaging issues. Furthermore, the transient thermal profile determines temperature gradients, hotspots, and thermal cycles. Traditional solutions rely on cycle-accurate simulations of detailed micro-architectural structures and are slow. The thesis shows that periodic power estimation is the key bottleneck in such approaches. It also demonstrates an approach (FastSpot) that integrates accurate thermal estimation into existing host-compiled simulations. The developed methodology can incorporate different sampling-based thermal models. It achieves a 32,000x increase in simulation throughput for temperature trace generation while incurring low measurement errors (0.06 K transient, 0.014 K steady-state) compared to a cycle-accurate reference method.
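To make the sampling-based idea concrete, the following is a minimal sketch of a first-order lumped RC thermal model driven by periodic power samples. This illustrates the general technique only, not FastSpot's actual model; the names (thermal_step) and constants (R_TH, C_TH, T_AMB) are hypothetical.

```c
#include <stdio.h>

/* Illustrative first-order lumped RC thermal model: each periodic power
 * sample advances the core temperature estimate by one explicit-Euler
 * step. All constants are assumed, not taken from the thesis. */
#define R_TH   0.8    /* thermal resistance, K/W   (assumed) */
#define C_TH   0.05   /* thermal capacitance, J/K  (assumed) */
#define T_AMB  318.0  /* ambient temperature, K    (assumed) */

static double temp = T_AMB;   /* current core temperature estimate */

/* Advance the thermal state by dt seconds given the sampled power in W. */
static void thermal_step(double power, double dt)
{
    /* dT/dt = (P - (T - T_amb)/R) / C  -- classic RC network equation */
    double dT = (power - (temp - T_AMB) / R_TH) / C_TH;
    temp += dT * dt;
}

int main(void)
{
    /* Hypothetical power trace sampled every 1 ms by the simulator. */
    double trace[] = { 1.2, 1.4, 3.0, 3.1, 0.9 };
    for (int i = 0; i < 5; i++) {
        thermal_step(trace[i], 1e-3);
        printf("t=%d ms  T=%.3f K\n", i + 1, temp);
    }
    return 0;
}
```

Sampling the power periodically and integrating it through such a model is what lets a host-compiled simulator produce a temperature trace without cycle-accurate detail.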
2

Dynamic time management for improved accuracy and speed in host-compiled multi-core platform models

Razaghi, Parisa, 07 July 2014
With increasing complexity and software content, modern embedded platforms employ a heterogeneous mix of multi-core processors along with hardware accelerators in order to provide high performance within limited power budgets. Due to complex interactions and highly dynamic behavior, static analysis of real-time performance and other constraints is challenging. As an alternative, full-system simulations have been widely accepted by designers. With traditional approaches being either slow or inaccurate, so-called host-compiled simulators have recently emerged as a solution for rapid evaluation of complete systems at early design stages. In such approaches, a faster simulation is achieved by natively executing application code at the source level, abstracting the execution behavior of target platforms and thus increasing simulation granularity. However, most existing host-compiled simulators focus on application behavior only, neglecting the effects of hardware/software interactions and the associated speed and accuracy tradeoffs in platform modeling. In this dissertation, we focus on host-compiled operating system (OS) and processor modeling techniques, and we introduce novel dynamic timing model management approaches that efficiently improve both the accuracy and speed of such models by automatically calibrating the simulation granularity. The contributions of this dissertation are twofold. We first establish an infrastructure for efficient host-compiled multi-core platform simulation by developing (a) abstract models of both real-time OSs and processors that replicate timing-accurate hardware/software interactions and enable full-system co-simulation, and (b) quantitative and analytical studies of host-compiled simulation principles to analyze error bounds and investigate possible improvements. Building on this infrastructure, we further propose specific techniques for improving accuracy and speed tradeoffs in host-compiled simulation by developing (c) an automatic timing granularity adjustment technique that dynamically observes system state to control the simulation, (d) an out-of-order cache hierarchy modeling approach to efficiently reorder memory access behavior in the presence of temporal decoupling, and (e) a synchronized timing model that aligns platform threads to run efficiently in parallel simulation. Results as applied to industrial-strength platforms confirm that, with careful abstractions and dynamic timing management, our models can achieve full-system simulations at speeds of more than a thousand MIPS with less than 3% timing error. Coupled with the capability to easily adjust simulation parameters and configurations, this demonstrates the benefits of our platform models for early application development and exploration.
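The following is a minimal sketch of the general temporal-decoupling pattern that contribution (c) builds on: back-annotated delays accumulate locally and the model synchronizes with the simulation kernel only when a quantum expires or an interaction demands finer granularity. This is not the dissertation's implementation; the names (consume_time, kernel_wait, irq_pending) and the quantum-adjustment policy are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

static uint64_t sim_time_ns    = 0;     /* global simulated time         */
static uint64_t local_delay_ns = 0;     /* accumulated, not yet synced   */
static uint64_t quantum_ns     = 10000; /* current synchronization grain */

/* Stand-ins for the simulation kernel and platform model. */
static int  irq_pending(void)        { return sim_time_ns > 40000; }
static void kernel_wait(uint64_t ns) { printf("sync after %llu ns\n",
                                              (unsigned long long)ns); }

/* Called with the back-annotated delay of each executed source block. */
static void consume_time(uint64_t delay_ns)
{
    local_delay_ns += delay_ns;
    /* Synchronize only when the quantum expires or an interaction
     * (here: a pending interrupt) demands finer granularity. */
    if (local_delay_ns >= quantum_ns || irq_pending()) {
        kernel_wait(local_delay_ns);
        sim_time_ns   += local_delay_ns;
        local_delay_ns = 0;
        /* Dynamic granularity: tighten when interactions are frequent,
         * relax again when the system is quiescent. */
        quantum_ns = irq_pending() ? quantum_ns / 2 : quantum_ns + 1000;
    }
}

int main(void)
{
    for (int i = 0; i < 20; i++)
        consume_time(3000);   /* 3 us of back-annotated execution */
    return 0;
}
```

Coarse quanta keep simulation fast; shrinking the quantum around interactions is what recovers accuracy, which is the tradeoff the dissertation's dynamic timing management automates.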
3

On Decoupling Concurrency Control from Recovery in Database Repositories

Yu, Heng, January 2005
We report on initial research on the concurrency control issue of compiled database applications. Such applications have a repository style of architecture, in which a collection of software modules operate on a common database in terms of a set of predefined transaction types; this architectural view is useful for the deployment of database technology to embedded control programs. We focus on decoupling concurrency control from any functionality relating to recovery. Such decoupling facilitates compile-time query optimization.

Because it is the possibility of transaction aborts for deadlock resolution that makes the recovery subsystem necessary, we choose the deadlock-free tree locking (TL) scheme for our purpose. With knowledge of the transaction workload, efficacious lock trees for runtime control can be determined at compile time. We have designed compile-time algorithms to generate the lock tree and other relevant data structures, as well as runtime locking/unlocking algorithms based on such structures, and we have further explored how to insert the lock steps into the transaction types at compile time.

To evaluate the performance of TL, we designed two simulation workloads: one from the OLTP benchmark TPC-C, and one from the open-source operating system MINIX. Our experimental results show that TL produces better throughput than traditional two-phase locking (2PL) when the transactions are write-only, and that for main-memory data TL performs comparably to 2PL even in workloads with many reads.
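For readers unfamiliar with tree locking, here is a minimal sketch of the classic TL rule that makes the protocol deadlock-free: a transaction may take its first lock anywhere, but every later lock only on a child of a node it currently holds, and a released node may never be relocked. This illustrates the textbook protocol, not the thesis's compile-time algorithms; all names are hypothetical.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef struct Node {
    struct Node *parent;
    int  holder;            /* transaction id holding the lock, -1 if free */
    bool released_by[16];   /* per-transaction "already unlocked" flag     */
} Node;

/* Grant the lock only if the TL rule allows it. */
bool tl_lock(Node *n, int txn, bool first_lock)
{
    if (n->holder != -1 || n->released_by[txn])
        return false;                       /* busy, or relock forbidden */
    if (!first_lock &&
        (n->parent == NULL || n->parent->holder != txn))
        return false;                       /* must hold the parent */
    n->holder = txn;
    return true;
}

void tl_unlock(Node *n, int txn)
{
    if (n->holder == txn) {
        n->holder = -1;
        n->released_by[txn] = true;         /* enforce no relocking */
    }
}

int main(void)
{
    Node root  = { NULL,  -1, {false} };
    Node child = { &root, -1, {false} };
    printf("lock root:   %d\n", tl_lock(&root, 0, true));
    printf("lock child:  %d\n", tl_lock(&child, 0, false));
    tl_unlock(&root, 0);
    printf("relock root: %d\n", tl_lock(&root, 0, true)); /* refused */
    return 0;
}
```

Because lock acquisition always follows tree edges downward, circular waits cannot form, so no transaction is ever aborted for deadlock resolution; this is precisely what lets the recovery subsystem be decoupled.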
4

Functional native simulation techniques for many-core systems

Sarrazin, Guillaume, 23 May 2016
The number of transistors on a chip keeps increasing, following Moore's conjecture that it doubles every two years. Current systems are so complex that chip design and the development of software specific to one chip take too much time, even when software development proceeds in parallel with the design of the hardware architecture, often because of system integration issues. To help reduce this time, the general solution is to use virtual platforms that reproduce the behavior of the target chip. The simulation speed of these platforms is a major issue, especially for many-core systems in which the number of programmable cores is very high. This thesis therefore focuses on native simulation, whose principle is to compile source code directly for the host architecture, allowing very fast simulation at the cost of requiring "equivalent" features on the target and host cores. However, some features specific to the target core may be missing on the host core. Hardware-Assisted Virtualization (HAV) eases native simulation, but it reinforces the dependency of the target chip simulation on the capabilities of the host core. In this context, we propose a way to simulate the target core's functional specific features in HAV-based native simulation.

Among target core features, the floating point unit is an important element that is too often neglected in native simulation, leading to potential differences between target and host computation results. We restrict our study to the compiled simulation technique and propose a methodology that accurately simulates floating point computations while keeping a good simulation speed. Finally, native simulation has a scalability issue: time decoupling problems cause unnecessary instructions to be simulated during synchronization procedures between tasks executing on the target cores, leading to an important decrease in simulation speed as the number of cores grows. We address this problem and propose solutions that allow native simulation to scale better.
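The floating point problem can be illustrated with a small sketch: instead of letting the host evaluate expressions natively, a compiled simulator can emit calls to wrappers that pin the rounding mode and force binary32 storage. This assumes an IEEE 754 binary32 target FPU and is not the thesis's actual mechanism; the name target_fadd is hypothetical.

```c
#include <fenv.h>
#include <stdio.h>

#pragma STDC FENV_ACCESS ON

/* Wrapper the simulator would call instead of a native '+': pins the
 * rounding mode assumed for the target and forces binary32 rounding. */
static float target_fadd(float a, float b)
{
    int saved = fegetround();
    fesetround(FE_TONEAREST);   /* assumed target rounding mode */
    volatile float r = a + b;   /* volatile blocks host re-association
                                   and extended-precision folding */
    fesetround(saved);
    return r;
}

int main(void)
{
    /* In binary32, 1.0f is below half an ulp of 1e8f, so the sum rounds
       back to 1e8f -- the kind of target-visible effect a host using
       wider intermediate precision could otherwise mask. */
    printf("%.1f\n", (double)target_fadd(1e8f, 1.0f));
    return 0;
}
```

The cost of such wrappers on every floating point operation is exactly the speed/accuracy tension the thesis's methodology addresses.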
5

ExtractCFG: a framework to enable accurate timing back-annotation of C language source code

Goswami, Arindam, 30 September 2011
The current trend in embedded systems design is to move the initial design and exploration phase to a higher level of abstraction in order to tackle the rapidly increasing complexity of embedded systems. One approach to abstracting software development from low-level platform details is host-compiled simulation, in which characteristics of the target platform are represented by annotating the high-level source code. Compiler optimizations make accurate annotation of the code a challenging task. In this thesis, we describe an approach to enable correct back-annotation of C code at the basic block level while taking compiler optimizations into account.
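As an illustration of what basic-block back-annotation produces, the sketch below shows C source instrumented with per-block cycle costs that drive a simulated time counter. The output format, the ANNOTATE macro, and the cycle numbers are hypothetical, not ExtractCFG's actual output; in practice the costs would come from analyzing the cross-compiled binary.

```c
#include <stdint.h>
#include <stdio.h>

static uint64_t sim_cycles = 0;     /* simulated target cycle counter */
#define ANNOTATE(cycles) (sim_cycles += (cycles))

int32_t saturating_add(int32_t a, int32_t b)
{
    ANNOTATE(4);                    /* cost of the entry basic block */
    int64_t r = (int64_t)a + b;
    if (r > INT32_MAX) {
        ANNOTATE(2);                /* cost of the overflow branch */
        return INT32_MAX;
    }
    ANNOTATE(1);                    /* cost of the fall-through block */
    return (int32_t)r;
}

int main(void)
{
    printf("%d after %llu cycles\n",
           saturating_add(2147483000, 1000),
           (unsigned long long)sim_cycles);
    return 0;
}
```

The difficulty the thesis tackles is that after compiler optimization the binary's basic blocks no longer map one-to-one onto the source blocks, so placing these increments correctly requires reconciling the two control flow graphs.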
6

Development of a computational database for application in Probabilistic Safety Analysis of nuclear research reactors

MACEDO, VAGNER dos S., 25 May 2017
The objective of this work is to present the database developed to store technical data and to process operation, failure, and maintenance data for equipment of the nuclear research reactors located at the Instituto de Pesquisas Energéticas e Nucleares (IPEN) in São Paulo, Brazil. Data extracted from this database can be applied in the Probabilistic Safety Analysis of the research reactors, or in less complex quantitative assessments related to the safety, reliability, availability, and maintainability of these facilities. The database was designed so that its information is available to users of the corporate network, the IPEN intranet; interested professionals must be registered by the system administrator before they can query or manipulate the data. The logical and physical model of the database is represented by an entity-relationship diagram and conforms to the security modules installed on the IPEN intranet. The database management system was developed with MySQL, which uses SQL as its interface, and the PHP programming language was used to let users manipulate the database. The result is a database management system capable of providing information in an optimized way and with good performance. / Dissertation (Master's in Nuclear Technology), Instituto de Pesquisas Energéticas e Nucleares (IPEN-CNEN/SP)
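To illustrate the kind of aggregate a reliability study would pull from such a system, here is a minimal sketch using the MySQL C API (the thesis used PHP for its interface; C is used here only to keep one language across the examples in this listing). The schema, table, and column names (reactor_db, failure_event, equipment_id) are hypothetical.

```c
/* Build with: gcc query.c $(mysql_config --cflags --libs) */
#include <mysql/mysql.h>
#include <stdio.h>

int main(void)
{
    MYSQL *conn = mysql_init(NULL);
    if (!mysql_real_connect(conn, "intranet-host", "user", "pass",
                            "reactor_db", 0, NULL, 0)) {
        fprintf(stderr, "connect: %s\n", mysql_error(conn));
        return 1;
    }
    /* Failure counts per equipment item -- a typical input to PSA
       component reliability estimates. */
    if (mysql_query(conn,
            "SELECT equipment_id, COUNT(*) FROM failure_event "
            "GROUP BY equipment_id") == 0) {
        MYSQL_RES *res = mysql_store_result(conn);
        MYSQL_ROW row;
        while ((row = mysql_fetch_row(res)))
            printf("equipment %s: %s failures\n", row[0], row[1]);
        mysql_free_result(res);
    }
    mysql_close(conn);
    return 0;
}
```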
7

Commitment as a strategy for the adoption of an environmental management system: The case of a public research institution

SILVA, MARIA C.C. da, 08 November 2017
Using the Strategic Options Development and Analysis (SODA) cognitive map, this thesis evaluates the importance of organizational commitment for the implementation of an Environmental Management System (EMS) by the fifteen managers of a federal public institution located in the state of São Paulo. The data composing the cognitive map were obtained through face-to-face interviews conducted from May to November 2015 and through group meetings with those managers from December 2015 to March 2016. The Problem Structuring Methods (PSMs) approach, through the SODA cognitive map, made it possible to investigate the uncertainties, complexities, and conflicts surrounding commitment that arise when environmental management is adopted through the implementation of an EMS. The analysis of the cognitive map demonstrated the importance of organizational commitment when an organization intends to adopt an EMS. Unlike Barbieri's (2007) view, this study places the importance of such behavior not only on top management but also on the whole team involved in the activities concerning the system. The study also allowed the construction of a model for measuring commitment to the EMS. Commitment, as measured by this instrument, is divided into two components: affective commitment, an attachment and involvement arising from identification with the organization (employees with strong affective commitment stay because they want to), and normative commitment, a felt obligation to remain in the organization (employees identified with this behavior stay because they feel obliged to). The suggested measurement model, which would allow elements to be mapped so that trends can be observed, was not validated in this study; future longitudinal studies should carry out its validation. / Thesis (Doctorate in Nuclear Technology), Instituto de Pesquisas Energéticas e Nucleares (IPEN-CNEN/SP)
8

Evaluation of cross-platform development for mobile devices

Friberg, Joy, January 2014
Developing an application for several platforms can be time consuming because each platform has its own operating system and development language. Cross-platform development makes it possible to develop an application that will work on several platforms. This report evaluates this kind of development through a case study for the company CGI, determining which cross-platform methodology is the preferred choice for a specific vacation booking application I developed for CGI. The methodologies studied were web, hybrid, interpreted, and cross-compiled. The preferred methodology for this vacation booking application was the hybrid alternative; with this methodology I also chose two tools, Icenium and jQuery Mobile. The purpose of this report was to find out whether cross-platform development can be a substitute for native programming, and by evaluating and developing cross-platform applications I found that it can be, provided the application is not too complex. In this specific case I also believe that hybrid development is a good substitute for native development for this kind of application.
9

Using Blazor technology with the DotVVM framework

Švikruha, Patrik, January 2019
Keywords: DotVVM, WebAssembly, WASM, Blazor, ASP.NET Core, .NET Core, .NET, Mono, JavaScript, JavaScript engine, LLVM, AOT compiler, JIT compiler, WSL
