About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
191

The System-on-a-Chip Lock Cache

Akgul, Bilge Ebru Saglam 12 April 2004 (has links)
In this dissertation, we implement efficient lock-based synchronization with a novel, high-performance, simple and scalable hardware technique and associated software for a target shared-memory multiprocessor System-on-a-Chip (SoC). The custom hardware part of our solution is provided in the form of an intellectual property (IP) hardware unit which we call the SoC Lock Cache (SoCLC). SoCLC provides effective lock hand-off by reducing on-chip memory traffic and improving performance in terms of lock latency, lock delay and bandwidth consumption. The proposed solution is independent of the memory hierarchy, cache protocol and processor architectures used in the SoC, which enables easily applicable implementations of the SoCLC (e.g., as reconfigurable or partially/fully custom logic) and distinguishes SoCLC from previous approaches. Furthermore, the SoCLC mechanism has been extended to support priority inheritance with an immediate priority ceiling protocol (IPCP) implemented in hardware, which enhances the hard real-time performance of the system. Our experimental results on a four-processor SoC indicate that SoCLC can achieve up to 37% overall speedup over spin-locks and up to 48% overall speedup over MCS for a microbenchmark with false sharing. The priority inheritance implemented as part of the SoCLC hardware, in turn, achieves a 1.43X speedup in overall execution time of a robot application when compared to the priority inheritance implementation under the Atalanta real-time operating system. Furthermore, with the IPCP mechanism integrated into the SoCLC, all of the tasks of the robot application could meet their deadlines (e.g., a high-priority task with a 250 µs worst-case response time completed its execution in 93 µs with SoCLC, whereas the same task missed its deadline without SoCLC, completing in 283 µs). 
Therefore, with IPCP support, our solution can provide better real-time guarantees for real-time systems. To automate SoCLC design, we have also developed an SoCLC generator tool, PARLAK, which generates user-specified configurations of a custom SoCLC. We used PARLAK to generate SoCLCs ranging from a version for two processors with 32 lock variables, occupying 2,520 gates, up to a version for fourteen processors with 256 lock variables, occupying 78,240 gates.
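The immediate priority ceiling protocol (IPCP) that the abstract describes can be sketched in software as follows. This is an illustrative model only, not the SoCLC hardware design: the class and task names are invented, contention handling is omitted, and larger numbers denote higher priority.

```python
# Sketch of the immediate priority ceiling protocol (IPCP): on acquiring a
# lock, a task immediately runs at the lock's ceiling (the highest priority
# of any task that may ever take the lock), which bounds priority inversion.
# All names here are hypothetical; the SoCLC implements this in hardware.

class IpcpLock:
    def __init__(self, ceiling):
        self.ceiling = ceiling      # highest priority among potential users
        self.holder = None

class Task:
    def __init__(self, name, base_priority):
        self.name = name
        self.base_priority = base_priority
        self.priority = base_priority

    def acquire(self, lock):
        assert lock.holder is None, "contention handling omitted in sketch"
        lock.holder = self
        # IPCP: run at the ceiling for the whole critical section
        self.priority = max(self.priority, lock.ceiling)

    def release(self, lock):
        lock.holder = None
        self.priority = self.base_priority
```

Because the priority boost happens at acquire time rather than on first contention, a lower-priority task holding the lock can never be preempted by a medium-priority task while a high-priority task waits.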
192

Design Space Exploration and Optimization of Embedded Memory Systems

Rabbah, Rodric Michel 11 July 2006 (has links)
Recent years have witnessed the emergence of microprocessors that are embedded within a plethora of devices used in everyday life. Embedded architectures are customized through a meticulous and time-consuming design process to satisfy stringent constraints with respect to performance, area, power, and cost. In embedded systems, the cost of the memory hierarchy limits its ability to play as central a role as it does in general-purpose systems, because tight constraints fundamentally limit the physical size and complexity of the memory system. Ultimately, application developers and system engineers are charged with the heavy burden of reducing the memory requirements of an application. This thesis offers the intriguing possibility that compilers can play a significant role in the automatic design space exploration and optimization of embedded memory systems. This insight is founded upon a new analytical model and novel compiler optimizations that are specifically designed to increase the synergy between the processor and the memory system. The analytical models serve to characterize intrinsic program properties, quantify the impact of compiler optimizations on the memory system, and provide deep insight into the trade-offs that affect memory system design.
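One classic example of an "intrinsic program property" that analytical memory-system models capture is reuse distance: the number of distinct addresses touched between two accesses to the same address. Its histogram predicts the hit rate of any fully-associative LRU cache size, making it useful for design-space exploration. The sketch below is a generic illustration of the concept, not the thesis's actual model.

```python
# Compute per-access reuse distances over a memory-address trace using an
# LRU stack. An access with reuse distance d hits in a fully-associative
# LRU cache of capacity > d. Generic illustration, not the thesis's model.

def reuse_distances(trace):
    """Return the reuse distance of each access (None on first touch)."""
    stack = []                      # LRU stack, most recently used last
    out = []
    for addr in trace:
        if addr in stack:
            # distinct addresses touched since the last access to addr
            depth = len(stack) - 1 - stack.index(addr)
            out.append(depth)
            stack.remove(addr)
        else:
            out.append(None)        # cold miss: no previous access
        stack.append(addr)
    return out
```

For the trace `a b c a b`, both reuses have distance 2, so a 2-entry LRU cache misses on them while a 3-entry cache hits.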
193

High-performance computer system architectures for embedded computing

Lee, Dongwon 26 August 2011 (has links)
The main objective of this thesis is to propose new methods for designing high-performance embedded computer system architectures. To achieve the goal, three major components - multi-core processing elements (PEs), DRAM main memory systems, and on/off-chip interconnection networks - in multi-processor embedded systems are examined in each section respectively. The first section of this thesis presents architectural enhancements to graphics processing units (GPUs), one of the multi- or many-core PEs, for improving performance of embedded applications. An embedded application is first mapped onto GPUs to explore the design space, and then architectural enhancements to existing GPUs are proposed for improving throughput of the embedded application. The second section proposes high-performance buffer mapping methods, which exploit useful features of DRAM main memory systems, in DSP multi-processor systems. The memory wall problem becomes increasingly severe in multiprocessor environments because of communication and synchronization overheads. To alleviate the memory wall problem, this section exploits bank concurrency and page mode access of DRAM main memory systems for increasing the performance of multiprocessor DSP systems. The final section presents a network-centric Turbo decoder and network-centric FFT processors. In the era of multi-processor systems, an interconnection network is another performance bottleneck. To handle heavy communication traffic, this section applies a crossbar switch - one of the indirect networks - to the parallel Turbo decoder, and applies a mesh topology to the parallel FFT processors. When designing the mesh FFT processors, a very different approach is taken to improve performance; an optical fiber is used as a new interconnection medium.
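The bank-concurrency idea in the second section can be illustrated with a toy buffer-placement sketch: if the communication buffers of a DSP pipeline start in different DRAM banks, accesses to them can overlap and each bank's row buffer stays open for page-mode hits. The decoding parameters below are assumptions chosen for illustration, not taken from the thesis's platform.

```python
# Toy model of bank-aware buffer placement under simple row-interleaved
# DRAM address decoding. ROW_BYTES and NUM_BANKS are illustrative values,
# not a specific DRAM part; real controllers use richer address mappings.

ROW_BYTES = 2048        # bytes per DRAM row (page)
NUM_BANKS = 8

def bank_of(addr):
    """Bank index when consecutive rows are interleaved across banks."""
    return (addr // ROW_BYTES) % NUM_BANKS

def place_buffers(sizes):
    """Lay buffers out so each starts on a fresh row boundary; for small
    buffers this places consecutive buffers in different banks, enabling
    bank concurrency and page-mode hits within each buffer."""
    bases, addr = [], 0
    for size in sizes:
        bases.append(addr)
        rows = max(1, -(-size // ROW_BYTES))    # ceil(size / ROW_BYTES)
        addr += rows * ROW_BYTES                # pad to the next row
    return bases
```

With three buffers of 100, 3000 and 500 bytes, the sketch yields bases 0, 2048 and 6144, which decode to banks 0, 1 and 3 — no two buffers contend for the same row buffer.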
194

Fault propagation analysis of large-scale, networked embedded systems

Pattnaik, Aliva 16 November 2011 (has links)
In safety-critical, networked embedded systems, it is important to correctly analyze how a fault in one component of the system can propagate to other components. Many real-world systems, such as modern aircraft and automobiles, use large-scale networked embedded systems with complex behavior. In this work, we have developed techniques, and a software tool, FauPA, that uses those techniques, to automate fault-propagation analysis of large-scale, networked embedded systems such as those used in modern aircraft. This work makes three main contributions. 1. Fault-propagation analyses: we developed algorithms for two types of analyses, forward analysis and backward analysis; for backward analysis, we developed two techniques, a naive algorithm and an algorithm that uses Datalog. 2. A system description language: we developed an XML-based language that we call Communication System Markup Language (CSML), in which a system can be specified concisely and at a high level. 3. A GUI-based display of the system and analysis results: we developed a GUI to visualize a system specified in CSML, as well as the results of the fault-propagation analyses.
195

Distributed real-time processing using GNU/Linux/libré software and COTS hardware

Van Schalkwyk, Dirko 03 1900 (has links)
Thesis (MScIng)--Stellenbosch University, 2004. / ENGLISH ABSTRACT: This thesis studies the viability of using both low-cost consumer Commodity Off The Shelf (COTS) PCs and libré software to implement a distributed real-time system modeled on a real-world engineering problem. Debugging and developing a modular satellite system is both time-consuming and complex; to this end the SUNSAT team has envisioned the Interactive Test System, a dual-mode simulator/monitoring system. It is this system that requires a real-time back-end, and it serves as the real-world problem model to implement. The implementation was accomplished by researching the available parallel-processing software and real-time extensions to GNU/Linux and choosing the appropriate solutions based on the needs of the model. A monitoring system was also implemented, for system verification, using freely available system-monitoring utilities. The model was successfully implemented and verified with a global synchronization of < 10 ms. It was shown that GNU/Linux and libré software are both mature enough and appropriate for solving a real-world distributed real-time problem.
196

Projeto e validação de software automotivo com o método de desenvolvimento baseado em modelos / Automotive software project and validation with model based design

Nunes, Lauro Roberto 07 July 2017 (has links)
Heavy-duty automotive vehicles have particular functionalities and operate in an aggressive environment. To ensure better performance, safety and reliability of embedded electronic equipment, it is necessary to improve the methods and processes of automotive embedded-software development. Considering Model-Based Design (MBD) an ascending development method in the automotive industry, this work investigates contributions to requirements engineering, software optimization and validation, in order to demonstrate the efficiency of the method and tools in pursuit of final product quality (a heavy-duty commercial vehicle). The work covers the integration of requirements engineering with simulation (MIL - Model in the Loop), a comparison of the optimization of automatically generated code between conventional programming tools (IDEs) and model-based tools, validation and code coverage of the generated software, and an alternative way of increasing the coverage of the tested code.
197

Análise de requisitos temporais para sistemas embarcados automotivos / Timing analysis of automotive embedded systems

Acras, Mauro 14 December 2016 (has links)
Automotive embedded systems are computer systems that support functionality in the form of embedded software to provide users with greater comfort, safety and performance. However, the large number of integrated functions raises the level of complexity, so appropriate design methods and tools must be used to guarantee the functional and non-functional requirements of the system. Every automotive embedded-software project must begin with the definition of functional requirements and, according to the dynamics of the subsystem that an ECU (Electronic Control Unit) will control and/or manage, must also define the timing requirements. An automotive function may have timing requirements such as activation period, end-to-end delay and deadline, among others, which in turn are strictly related to the characteristics of the hardware architecture used. An automotive system is a distributed embedded computing architecture in which tasks and messages exchange signals and may have timing requirements that must be met. The timing analysis for verification and validation of these requirements can be carried out at the level of the distributed architecture, of tasks and of instructions, and the proper use of methods and tools is a necessary condition for this verification. 
This work therefore presents a description of the state of the art of timing analysis in automotive embedded systems, its properties, and the use of the Gliwa tools to evaluate whether timing requirements are met. An illustrative example was implemented to show how the methods, processes and tools should be applied to verify that the timing requirements defined at the beginning of the project are met, and so that an existing system can support additional functions whose timing requirements must be guaranteed. Timing-analysis tools also serve to verify that computational resources are being used as specified at the beginning of the project.
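The per-task verification that such timing tools perform is commonly grounded in the classical fixed-point response-time analysis for fixed-priority preemptive scheduling: R_i = C_i + Σ_{j ∈ hp(i)} ⌈R_i / T_j⌉ · C_j. The sketch below, with illustrative task parameters, shows the iteration; commercial tools such as Gliwa's add measurement, tracing and far more detailed system models.

```python
# Fixed-point response-time analysis for fixed-priority preemptive tasks:
# R = C + sum over higher-priority tasks of ceil(R / T_j) * C_j.
# Task parameters in the tests are illustrative, not from the thesis.

from math import ceil

def response_time(c_i, higher_prio, deadline):
    """Worst-case response time of a task with WCET c_i, given (C, T)
    pairs for all higher-priority tasks; None if the deadline is missed."""
    r = c_i
    while r <= deadline:
        r_next = c_i + sum(ceil(r / t) * c for c, t in higher_prio)
        if r_next == r:
            return r                # fixed point reached
        r = r_next
    return None                     # response time exceeds the deadline
```

For a task with WCET 3 preempted by tasks (C=1, T=4) and (C=2, T=6), the iteration converges to a worst-case response time of 10 time units.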
198

Critérios para adoção e seleção de sistemas operacionais embarcados / Criteria for the adoption and selection of embedded operating systems

Moroz, Maiko Rossano 30 November 2011 (has links)
CNPq / An embedded system is a computing system designed for a specific purpose, present in essentially every modern electronic device. The use of an operating system (OS) is advocated as a means to simplify software development, freeing programmers from managing low-level hardware and providing a simpler programming interface for common tasks. The high complexity of modern desktop computers makes an OS indispensable; embedded systems, on the other hand, are limited architectures, usually severely cost- and power-constrained. Because of the additional demands imposed by an OS, embedded developers face the crucial decision of whether or not to adopt one. In this work, we introduce a set of criteria to help determine whether an OS should be adopted in an embedded design, and a further set of criteria to guide the selection of the OS best suited to the project, if one is used. Fifteen operating systems were analyzed according to these criteria and can serve as a baseline for the selection process. In addition, to evaluate the impact of adopting an OS in an embedded design, we present a case study in which a sample application (an embedded weather station) was developed under three different scenarios: without any OS, using the µC/OS-II real-time OS, and using the uClinux general-purpose OS. An FPGA and a SoPC were used to provide a flexible hardware platform able to accommodate all three configurations. The adoption of an OS reduced development time by up to 48%; on the other hand, it increased program-memory requirements by at least 71%.
199

Software defect prediction using maximal information coefficient and fast correlation-based filter feature selection

Mpofu, Bongeka 12 1900 (has links)
Software quality assurance seeks to ensure that the applications developed are free of failures. Some modern systems are intricate due to the complexity of their information processes. Software fault prediction is an important quality-assurance activity: a mechanism that correctly predicts the defect-proneness of modules and classifies them accordingly saves resources, time and developers' effort. In this study, a model that selects relevant features for use in defect prediction was proposed. The literature review revealed that process metrics, which are based on historical source code over time, are better predictors of defects in versioned systems. These metrics are extracted from the source-code module and include, for example, the number of additions to and deletions from the source code, the number of distinct committers, and the number of modified lines. In this research, defect prediction was conducted on open-source software (OSS) of software product lines (SPL), hence process metrics were chosen. Data sets used in defect prediction may contain non-significant and redundant attributes that can affect the accuracy of machine-learning algorithms. To improve the prediction accuracy of classification models, only features that are significant to the defect-prediction process are utilised. In machine learning, feature-selection techniques are applied to identify the relevant data; feature selection is a pre-processing step that helps reduce the dimensionality of the data, and such techniques include information-theoretic methods based on the entropy concept. This study experimentally evaluated the efficiency of these feature-selection techniques and found that defect prediction using significant attributes improves prediction accuracy. 
A novel MICFastCR model, which is based on the Maximal Information Coefficient (MIC) was developed to select significant attributes and Fast Correlation Based Filter (FCBF) to eliminate redundant attributes. Machine learning algorithms were then run to predict software defects. The MICFastCR achieved the highest prediction accuracy as reported by various performance measures. / School of Computing / Ph. D. (Computer Science)
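The FCBF-style redundancy filter described above can be sketched with symmetrical uncertainty, SU(X, Y) = 2·I(X; Y) / (H(X) + H(Y)), computed on discretised features. Note an assumption: the thesis's MICFastCR model ranks features with MIC rather than SU; SU is used here only to keep the sketch self-contained, so this illustrates the FCBF selection logic, not the MICFastCR model itself.

```python
# FCBF-style feature selection on discretised features. A feature is kept
# if it is relevant to the target and no already-selected feature predicts
# it better than it predicts the target (a "predominant" feature).

from collections import Counter
from math import log2

def entropy(xs):
    n = len(xs)
    return -sum(c / n * log2(c / n) for c in Counter(xs).values())

def sym_uncertainty(xs, ys):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), normalised to [0, 1]."""
    h_x, h_y = entropy(xs), entropy(ys)
    if h_x + h_y == 0:
        return 0.0
    mi = h_x + h_y - entropy(list(zip(xs, ys)))   # I(X; Y)
    return 2 * mi / (h_x + h_y)

def fcbf(features, target, threshold=0.1):
    """Return indices of selected features, most relevant first."""
    sus = [sym_uncertainty(f, target) for f in features]
    order = sorted(range(len(features)), key=lambda i: -sus[i])
    selected = []
    for i in order:
        if sus[i] < threshold:
            break                   # remaining features are irrelevant
        if all(sym_uncertainty(features[j], features[i]) < sus[i]
               for j in selected):
            selected.append(i)      # not redundant with any kept feature
    return selected
```

Given two identical predictive features and one irrelevant one, the filter keeps a single copy of the predictive feature and discards the rest.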
200

Infraestrutura de aquisição de dados por redes de sensores sem fios e barramentos para monitoramento do consumo de energia elétrica / Data-acquisition infrastructure using wireless sensor networks and buses for monitoring electric power consumption

Hara, Elon Cris Penteado 18 December 2013 (has links)
CAPES / An electric-power-consumption monitoring system for indoor sectors of consumer units was developed for an R&D project of ANEEL (Agência Nacional de Energia Elétrica, the Brazilian electricity regulator). The system uses wireless sensor networks (WSNs) placed at strategic points of the power grid and connected by radio link to a remote database. The sensors provide data that are recorded and later accessed by an application that presents information to end users through alerts, reports and graphical interfaces. WSNs are used to acquire signals from multiple sensors scattered over large areas, in networks that are scalable in the number of devices and flexible in topology, forming automatically along the best or currently available routes. WSNs have also proven useful in systems related to the smart-grid concept, offering a "smart" tool that integrates information collected from electric loads in consumer units into the generation and distribution system, with the aim of improving system performance at peak demand. Despite the availability of the technology, commercial devices are still scarce in the market, resulting in high final costs for consumption-monitoring systems. To fill this gap, a low-cost infrastructure configuration was developed, with sensor modules interconnected by I2C buses and a shared memory, forming a cluster that is attached to a WSN capable of forming mesh networks through the MiWi P2P (Peer-to-Peer) protocol. Since multiple sensors share a single antenna to send data, costs are minimized. Another distinctive feature of the method is that it gives access to sensors concentrated inside metal cabinets and enclosures, which otherwise could not be connected to a WSN.
