111

Evaluation Techniques for Mapping IPs on FPGAs

Lakshminarayana, Avinash 01 September 2010 (has links)
The phenomenal density growth in semiconductors has made billions of transistors available on a single die, while aggressive competition keeps shrinking time-to-design and the integration of many discrete components on a single chip grows at a rapid pace. Designing such heterogeneous systems in a short time is becoming difficult with existing technology. Field-Programmable Gate Arrays offer a good alternative on both the productivity and heterogeneity fronts, but several obstacles must be addressed to make them a viable option. One such obstacle is the lack of early design space exploration tools and techniques for FPGA designs. This thesis develops techniques to systematically evaluate the available design options before the actual system implementation. What makes this problem interesting, yet complicated, is that system-level optimization is not linearly summable: the discrete components of a system, each benchmarked as best in every design parameter (speed, area, and power), need not add up to the best possible system. This work addresses the problem in two ways. The first approach demonstrates that working at higher levels of abstraction yields order-of-magnitude improvements in productivity. Designing a system directly from its behavioral description is an ongoing effort in industry; instead of focusing on design aspects, these methods are used to build quick prototypes and estimate the design parameters. Design space exploration needs a relative comparison among available choices rather than accurate values of the design parameters, and the proposed method is shown to do an acceptable job in this regard. The second approach develops statistical techniques for estimating the design parameters and then algorithmically searches the design space. Specifically, a high-level power estimation model is developed for FPGA designs. While existing techniques develop a power model for each discrete component separately, this work evaluates a generic power model for multiple components. / Master of Science
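As a hedged illustration of the generic statistical power model the second approach describes, the sketch below fits a single linear model across several component types from resource utilization and switching activity; the feature set, data, and fitting method are assumptions for illustration, not the thesis's actual model.

```python
import numpy as np

# Hypothetical characterization data, one row per benchmarked IP block regardless of type.
# Features: [LUTs, flip-flops, BRAMs, DSPs, average toggle rate]; target: measured power (mW).
features = np.array([
    [1200,  900,  2,  0, 0.12],
    [4300, 3100,  8,  4, 0.25],
    [ 800,  650,  1,  0, 0.08],
    [9700, 7200, 16,  8, 0.31],
    [2500, 1800,  4,  2, 0.18],
    [6100, 4400, 10,  6, 0.22],
], dtype=float)
measured_power_mw = np.array([35.0, 120.0, 22.0, 260.0, 70.0, 165.0])

# Fit one generic linear power model, P ~ X.w + b, shared by all component types.
X = np.hstack([features, np.ones((features.shape[0], 1))])  # append bias column
coeffs, *_ = np.linalg.lstsq(X, measured_power_mw, rcond=None)

def estimate_power(lut, ff, bram, dsp, toggle):
    """Early, pre-implementation power estimate for a new component."""
    x = np.array([lut, ff, bram, dsp, toggle, 1.0])
    return float(x @ coeffs)

print(estimate_power(2000, 1500, 4, 2, 0.18))
```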
112

Efficient data structures for discovery in high level architecture (HLA)

Rahmani, Hibah 01 January 2000 (has links)
The High Level Architecture (HLA) is a prototype architecture for constructing distributed simulations. HLA is a standard adopted by the Department of Defense (DOD) for the development of simulation environments. An important goal of the HLA is to reduce the amount of data routing between simulations during run-time. The Runtime Infrastructure (RTI) is an operating system that is responsible for data routing between the simulations in HLA. The data routing service is provided by the Data Distribution Manager of the RTI. Several methods have been proposed and used for the implementation of data distribution services; the grid-based filtering method, the interval tree method, and the quad-tree method are examples. This thesis analyzes and compares two such methods, the grid and the quad-tree, with regard to their use in discovering intersections of publications and subscriptions. The number of false positives and the CPU time of each method are determined for typical cases. For most cases, the quad-tree method produces fewer false positives. This method is best suited for large simulations where the cost of maintaining false positives, or non-relevant entities, may be prohibitive. For most cases, the grid method is faster than the quad-tree method. This method may be better suited for small simulations where the host has the capacity to accommodate false positives. The results of this thesis can be used to decide which of the two methods is better suited to a particular type of simulation exercise.
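To make the comparison concrete, here is a minimal, hedged sketch of grid-based filtering for 2D axis-aligned publication and subscription regions, counting the false positives it reports relative to exact intersection; the region shapes, grid cell size, and data are illustrative assumptions, not the thesis's experimental setup.

```python
from itertools import product

# A region is an axis-aligned box: (xmin, ymin, xmax, ymax).
def overlaps(a, b):
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def grid_cells(region, cell):
    """Indices of the grid cells a region touches, for a uniform grid of size `cell`."""
    x0, y0 = int(region[0] // cell), int(region[1] // cell)
    x1, y1 = int(region[2] // cell), int(region[3] // cell)
    return set(product(range(x0, x1 + 1), range(y0, y1 + 1)))

def grid_matches(pubs, subs, cell):
    """Grid-based filtering: pairs sharing at least one cell are reported as matches."""
    matches = set()
    for i, p in enumerate(pubs):
        pc = grid_cells(p, cell)
        for j, s in enumerate(subs):
            if pc & grid_cells(s, cell):
                matches.add((i, j))
    return matches

pubs = [(0, 0, 3, 3), (10, 10, 12, 12)]
subs = [(4, 4, 6, 6), (11, 11, 13, 13)]
reported = grid_matches(pubs, subs, cell=5.0)
exact = {(i, j) for i, p in enumerate(pubs) for j, s in enumerate(subs) if overlaps(p, s)}
false_positives = reported - exact   # pairs that share a cell but do not actually intersect
print(len(reported), len(exact), len(false_positives))
```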
113

An Ethnography of the Self-Determination of Students with Disabilities when Participating in High-Level Mathematics Tasks in an Inclusive Classroom

DelliBovi, Diane M 01 January 2024 (has links) (PDF)
The goal of this ethnographic research study was to illuminate and analyze how one group of students, some identified with disabilities, experienced learning in an inclusive mathematics classroom. The data collection took place in one second-grade general education classroom. An interactional ethnographic approach was used to analyze the motivations of students with disabilities for participating when presented with high-level mathematics tasks within the classroom, as indicated by the Instruction Quality Assessment toolkit. I used Self-Determination Theory as a lens to analyze how students’ autonomy, competence, and relatedness impacted their participation. From informants' self-reported perceptions of self-determination through an initially adapted Basic Psychological Needs Assessment survey, I focused my observations on what might impact student motivation or willingness to participate. By uncovering what collaboration and participation looked like in this classroom, as described by key informants and evidenced through their discursive actions, I used a domain and taxonomic analysis to construct and organize my findings. Two taxonomies were constructed, ways to get help and ways to give help, based on informants’ constructions of 1) how to collaborate within their group and 2) how to communicate during a mathematical disagreement. The analysis disclosed two ways of getting help and six ways of giving help. Participation was consistent during high-level or low-level tasks presented by the teacher during collaborative time. The findings revealed that informants preferred the role of giving help and often refused help from group members. There are two major conclusions of this work: 1) There is a need for analysis of student discourse during mathematical disagreements, as students get, give, and refuse help; and 2) Perspectives of students with disabilities in inclusive mathematics classrooms should continue to be explored with efforts to promote mathematical agency in all learners.
114

Verificação de Projetos de Sistemas Embarcados através de Cossimulação Hardware/Software / Verification of Embedded System Designs through Hardware/Software Co-simulation

Silva Junior, José Cláudio Vieira e 17 August 2015 (has links)
This work proposes an environment for the verification of heterogeneous embedded systems through distributed co-simulation. The verification co-simulates the system software and the hardware platform synchronously, using the High Level Architecture (HLA) as middleware. The novelty of this approach is not only providing support for simulations but also allowing synchronized integration with physical hardware devices. The Ptolemy framework is used as the simulation platform. The integration of HLA with Ptolemy and the hardware models opens up a vast set of applications, such as testing many devices at the same time running the same or different applications or modules, and distributing the execution across multiple embedded devices for performance improvement. Furthermore, the HLA-based approach allows any type of robot, as well as simulators other than Ptolemy, to be connected to the environment. Case studies are presented to prove the concept, showing the successful integration between Ptolemy and the HLA and the verification of systems using Hardware-in-the-loop and Robot-in-the-loop.
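To make the synchronization idea concrete, the sketch below shows a conservative, time-stepped co-simulation loop in which no federate advances past the current step until every participant has computed it; the Federate class and run_cosimulation function are hypothetical stand-ins for the HLA time-management services, not the RTI API.

```python
# Hypothetical, simplified stand-in for HLA time management: each federate is driven
# step by step, and no federate moves ahead of the others.
class Federate:
    def __init__(self, name):
        self.name = name
        self.time = 0.0

    def step(self, t):
        """Compute the federate's local behaviour for simulation time t."""
        print(f"{self.name}: computing step at t={t:.1f}")
        self.time = t

def run_cosimulation(federates, step, end_time):
    """Conservative lockstep advance across all federates."""
    t = 0.0
    while t < end_time:
        t += step
        # In a real HLA federation this barrier is provided by the RTI's
        # time advance request/grant services; here it is a plain loop.
        for fed in federates:
            fed.step(t)

run_cosimulation([Federate("ptolemy_model"), Federate("hw_in_the_loop")],
                 step=0.1, end_time=0.3)
```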
115

Estudo de resiliência em comunicação entre sistemas Multirrobôs utilizando HLA / A Study of Resilience in Communication between Multi-robot Systems using HLA

Simão, Rivaldo do Ramos 04 March 2016 (has links)
Cooperation in a multi-robot system has become a challenge to overcome and one of the biggest incentives for researchers in this area, because communication is one of its most important requirements. This study investigates the feasibility of using the distributed simulation environment HLA (High Level Architecture) for communication between the members of a system of three and five computers that simulates a multi-robot system, in order to verify its behaviour when one of them is replaced by another with limited processing power. To this end, a new communication approach based on the HLA middleware was developed, in which the robots adapt their transmission rate according to the performance of the other robots. The experiments showed that the real-time requirements of a robot soccer application were met using this approach, which points to a new possibility for real-time communication between robots. In one experiment, a direct comparison was made between the RTDB (Real-time Database) middleware and the presented approach; it showed that, in some scenarios, the adaptive HLA is about 5 to 12 percent more efficient than RTDB.
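A hedged sketch of the adaptation described above: a sender lowers its transmission rate when a peer's measured processing delay grows and raises it again when all peers keep up. The AIMD-style policy, thresholds, and rates are illustrative assumptions, not values from the thesis.

```python
class AdaptiveSender:
    """Adapts the update rate to the slowest peer's observed processing delay."""

    def __init__(self, min_hz=5.0, max_hz=60.0):
        self.min_hz = min_hz
        self.max_hz = max_hz
        self.rate_hz = max_hz

    def update_rate(self, peer_delays_s):
        # Illustrative policy: back off multiplicatively if any peer lags behind
        # the current send period, recover additively when everyone keeps up.
        worst = max(peer_delays_s)
        if worst > 1.0 / self.rate_hz:           # a peer cannot keep up with this rate
            self.rate_hz = max(self.min_hz, self.rate_hz * 0.5)
        else:
            self.rate_hz = min(self.max_hz, self.rate_hz + 1.0)
        return self.rate_hz

sender = AdaptiveSender()
for delays in [[0.01, 0.02], [0.05, 0.20], [0.01, 0.01]]:
    print(sender.update_rate(delays))
```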
116

Desenvolvimento e Avaliação de Simulação Distribuída para Projeto de Sistemas Embarcados com Ptolemy / Development and Evaluation of Distributed Simulation for Embedded System Design with Ptolemy

Negreiros, ângelo Lemos Vidal de 29 January 2014 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Nowadays, embedded systems have a huge amount of computational power and, consequently, high complexity. It is quite common to find different applications being executed on embedded systems. Embedded system design demands methods and tools that allow simulation and verification in an efficient and practical way. This work proposes the development and evaluation of a solution for modeling and simulating heterogeneous embedded systems in a distributed way, by integrating Ptolemy II with the High Level Architecture (HLA), a middleware for distributed discrete-event simulation, in order to create a high-performance environment for executing large-scale heterogeneous models. Experimental results demonstrated that non-distributed simulation can be infeasible in some situations, as can distributed simulation with only a few machines (one, two, or three computers). The experiments also demonstrated the feasibility of integrating the two technologies and the advantages of doing so in several simulation scenarios. These conclusions were possible because the experiments captured execution time, data exchanged over the network, and CPU usage during the simulation. One experiment achieved a speedup of a factor of four when a model with 4,000 actors was distributed over eight machines, in an experiment that used up to 16 machines. Furthermore, the experiments showed that the use of HLA offers clear advantages, although with certain limitations.
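The reported scaling result can be read through the usual speedup and parallel-efficiency definitions; the short restatement below applies the abstract's own numbers (a speedup of 4 on 8 machines) and adds no data beyond them.

```latex
% Standard definitions of speedup and parallel efficiency:
\[
  S(p) = \frac{T_1}{T_p}, \qquad E(p) = \frac{S(p)}{p}
\]
% With the abstract's numbers, p = 8 and S(8) = 4, hence
\[
  E(8) = \frac{4}{8} = 0.5,
\]
% i.e. half of ideal linear scaling, consistent with the communication
% overhead and limitations the abstract mentions.
```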
117

Generation of Application Specific Hardware Extensions for Hybrid Architectures: The Development of PIRANHA - A GCC Plugin for High-Level-Synthesis

Hempel, Gerald 11 November 2019 (has links)
Architectures combining a field-programmable gate array (FPGA) and a general-purpose processor on a single chip have become increasingly popular in recent years. On the one hand, such hybrid architectures facilitate the use of application-specific hardware accelerators that improve the performance of the software on the host processor. On the other hand, they oblige system designers to handle the whole hardware/software co-design process. The complexity of this process is still one of the main reasons that hinder the widespread use of hybrid architectures. Thus, an automated process that aids programmers with hardware/software partitioning and the generation of application-specific accelerators is an important issue. The method presented in this thesis requires neither restrictions on the high-level language used nor special source code annotations, which are usually an entry barrier for programmers without a deeper understanding of the underlying hardware platform. This thesis introduces a seamless programming flow that generates hardware accelerators for unrestricted, legacy C code. The implementation consists of a GCC plugin that automatically identifies application hot spots and generates hardware accelerators accordingly. Apart from the accelerator implementation in a hardware description language, the compiler plugin provides the generation of the host processor interface and, if necessary, a prototypical integration with the host operating system. An evaluation with typical embedded applications shows the general benefits of the approach, but also reveals limiting factors that hamper possible performance improvements.
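As a hedged illustration of the hot-spot identification step such a flow relies on, the sketch below ranks profiled functions by their share of total runtime and selects accelerator candidates above a threshold; the profile data and the 10% threshold are assumptions for illustration, and the actual PIRANHA plugin performs this analysis inside GCC rather than on a flat profile.

```python
# Hypothetical flat profile: function name -> self time in seconds
# (the numbers are made up for illustration).
profile = {
    "fir_filter":   4.20,
    "fft_radix2":   2.75,
    "parse_config": 0.10,
    "main_loop":    0.35,
}

def select_hotspots(profile, threshold=0.10):
    """Return functions whose share of total runtime exceeds `threshold`,
    ordered by descending share; these are the accelerator candidates."""
    total = sum(profile.values())
    shares = {fn: t / total for fn, t in profile.items()}
    return sorted(
        ((fn, share) for fn, share in shares.items() if share >= threshold),
        key=lambda item: item[1],
        reverse=True,
    )

for fn, share in select_hotspots(profile):
    print(f"candidate for HLS accelerator: {fn} ({share:.0%} of runtime)")
```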
118

Numerical analysis of thermo-hydro-mechanical (THM) processes in the clay based material

Wang, Xuerui 06 October 2016 (has links)
Clay formations are investigated worldwide as potential host rock for the deep geological disposal of high-level radioactive waste (HLW). Bentonite is usually preferred as the buffer and backfill material in the disposal system. In the disposal of HLW, heat emission is one of the most important issues, as it can generate a series of complex thermo-hydro-mechanical (THM) processes in the surrounding materials and thus change the material properties. In the context of safety assessment, it is important to understand the thermally induced THM interactions and the associated changes in material properties. In this work, the thermally induced coupled THM behaviours in the clay host rock and in the bentonite buffer, as well as the corresponding coupling effects among the relevant material properties, are numerically analysed. A coupled non-isothermal Richards flow mechanical model and a non-isothermal multiphase flow model were developed based on the scientific computing code OpenGeoSys (OGS). Heat transfer in the porous media is governed by thermal conduction and the advective flow of the pore fluids. Within the hydraulic processes, evaporation, vapour diffusion, and the unsaturated flow field are considered. Darcy’s law is used to describe the advective flux of the gas and liquid phases, and the relative permeability of each phase is considered. The elastic deformation process is modelled by the generalized Hooke’s law, complemented with additional strain caused by swelling/shrinkage behaviour and by temperature change. In this study, special attention has been paid to the analysis of thermally induced changes in material properties. The strongly anisotropic mechanical and hydraulic properties of clay rock are described by a transversely isotropic mechanical model and a transversely isotropic permeability tensor, respectively. Thermal anisotropy is described by adopting a bedding-orientation-dependent thermal conductivity. The dependence of the thermal conductivity on the degree of water saturation, the thermal effects on the water retention behaviour, the effects of pore pressure variation on the permeability, and the anisotropic swelling/shrinkage behaviour have been analysed intensively, and the corresponding numerical models to account for these coupling effects have been developed. The developed numerical model has been applied to simulate laboratory and in situ heating experiments on bentonite and clay rock at different scales. First, the laboratory heating experiment on Callovo-Oxfordian Clay (COX) and the laboratory long-term heating and hydration experiment on MX80 pellets were simulated. Based on the knowledge gained from the numerical analysis of the laboratory experiments, a 1:2 scale in situ heating experiment of an integrated bentonite engineered barrier system (EBS) in the Opalinus Clay host rock was simulated. All the relevant operation phases were considered in the modelling. In addition, the modelling was extended to 50 years after the heater shut-down with the aim of predicting the long-term behaviour. Additionally, variation calculations were carried out to investigate the effects of the storage capacity of the Opalinus Clay on the thermally induced hydraulic response. In the long-term modelling, the effects of different saturated water permeabilities of the buffer material on the resaturation process were analysed.
Based on the current research and model developments, the observed THM behaviours of the bentonite buffer and the clay rock, that is, the measured evolution of temperature, pore pressure, humidity, swelling pressure, and so on in the laboratory and in situ experiments, can be reproduced and interpreted well. It is shown that, by using both a non-isothermal multiphase flow model and a non-isothermal Richards flow model combined with the corresponding thermal and mechanical models, the major THM behaviours can be captured. This validates that the developed model is able to simulate the relevant coupled THM behaviours of clayey material under well-defined laboratory conditions as well as under complex natural disposal conditions.
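For orientation, the constitutive relations named in the abstract can be summarized in a generic textbook form: Darcy flow with relative permeability, conductive and advective heat transport with a saturation-dependent thermal conductivity, and generalized Hooke's law extended by thermal and swelling strains. This is a schematic summary under standard assumptions, not the exact formulation implemented in OpenGeoSys.

```latex
% Darcy flux of phase \alpha (liquid or gas) with relative permeability:
\[
  \mathbf{q}_\alpha = -\frac{k_{r\alpha}\,\mathbf{k}}{\mu_\alpha}
                      \left( \nabla p_\alpha - \rho_\alpha \mathbf{g} \right)
\]
% Heat transport by conduction and advection of the pore fluids:
\[
  (\rho c)_{\mathrm{eff}} \frac{\partial T}{\partial t}
  + \sum_\alpha \rho_\alpha c_\alpha \,\mathbf{q}_\alpha \cdot \nabla T
  - \nabla \cdot \left( \lambda(S_w)\, \nabla T \right) = Q_T
\]
% Generalized Hooke's law with thermal and swelling/shrinkage strains:
\[
  \boldsymbol{\sigma} = \mathbf{C} :
  \left( \boldsymbol{\varepsilon}
         - \boldsymbol{\varepsilon}_{\mathrm{th}}
         - \boldsymbol{\varepsilon}_{\mathrm{sw}} \right),
  \qquad
  \boldsymbol{\varepsilon}_{\mathrm{th}} = \alpha_T \,\Delta T\, \mathbf{I}
\]
```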
119

Test vid utveckling av IT- system : En studie om metoder och arbetssätt för low-level test / Test in Development of IT systems : A study of methods and procedures for low-level test

Vega Ledezma, Madeleine, Arslan, Murat-Emre January 2014 (has links)
Testing of information systems is an essential part of the system development process, used to minimize errors and improve the reliability of systems. The Trafikverket IT unit had a structured testing approach for high-level tests, but not for low-level tests in the development phase. We were assigned to examine methods and working practices for low-level testing. We also compared systems that had undergone structured testing at both low and high level against systems that had undergone unstructured low-level testing and structured high-level testing. The goal of the thesis was to propose appropriate methods for low-level testing at the Trafikverket IT unit, and to recommend whether structured testing at both low and high level is preferable to unstructured low-level testing combined with structured high-level testing. Through literature studies and interviews with Trafikverket employees we reached our result. Our recommendation for Trafikverket IT is to use test-driven development, because the developers were unsure of what should be tested and the method would make this clear. The developers also wanted options and guidelines that would give them a more definite work structure. We also recommend an adaptation of the Self-Governance framework, where activities are selected for each project by a method owner or project lead (Scrum Master) who determines which activities will be performed at the individual and group level.
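To illustrate the test-driven style recommended here, the minimal sketch below shows a unit test written before the code it specifies, followed by the smallest implementation that makes it pass; the function and scenario are invented for illustration and are not taken from Trafikverket's systems.

```python
import unittest

# Test written first (red): it specifies the behaviour of a function that does
# not exist yet. The function name and rules are illustrative assumptions.
class TestSpeedLimitValidator(unittest.TestCase):
    def test_rejects_negative_speed(self):
        self.assertFalse(is_valid_speed_limit(-10))

    def test_accepts_typical_speed(self):
        self.assertTrue(is_valid_speed_limit(110))

# Minimal implementation written afterwards to make the tests pass (green),
# then refactored as needed.
def is_valid_speed_limit(kmh):
    return 0 < kmh <= 120

if __name__ == "__main__":
    unittest.main()
```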
120

Parallel Hardware- and Software Threads in a Dynamically Reconfigurable System on a Programmable Chip

Rößler, Marko 06 December 2013 (has links)
Today's embedded systems depend on the availability of hybrid platforms that contain heterogeneous computing resources such as programmable processor units (CPUs or DSPs) and highly specialized hardware cores. These platforms have been scaled down to integrated embedded systems-on-chip. Modern platform FPGAs enhance such systems with the flexibility of runtime-configurable silicon. One of the major advantages that arises is the ability to use hardware (HW) and software (SW) resources in a time-shared manner, which gives the system the ability to dynamically assign computing resources based on decisions taken at runtime.
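As a hedged illustration of time-shared hardware/software execution, the sketch below dispatches each task to a free reconfigurable hardware slot when one is available and falls back to a software thread otherwise; the slot count, task model, and dispatch policy are assumptions for illustration, not the thesis's runtime system.

```python
from threading import Thread, Semaphore

HW_SLOTS = Semaphore(2)   # assume two reconfigurable regions available for accelerators

def run_in_hardware(task):
    print(f"{task}: executed on a reconfigurable HW slot")

def run_in_software(task):
    print(f"{task}: executed as a SW thread on the CPU")

def dispatch(task):
    # Runtime decision: take a HW slot if one is free right now, otherwise run in SW.
    if HW_SLOTS.acquire(blocking=False):
        try:
            run_in_hardware(task)
        finally:
            HW_SLOTS.release()
    else:
        run_in_software(task)

tasks = [f"task{i}" for i in range(5)]
workers = [Thread(target=dispatch, args=(t,)) for t in tasks]
for w in workers:
    w.start()
for w in workers:
    w.join()
```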
