31

A 68000-based produce sorting microcomputer : graduate clinical research master's report

Haidamus, Ramzi Albert 01 January 1989 (has links) (PDF)
This report discusses in detail the research, design, and development stages of the Produce Sorting Microcomputer developed for HAGAN ENGINEERING Inc. The two-semester Clinical Research project was approved by the graduate committee of the School of Engineering at the University of the Pacific and fulfills the requirements toward a Master's degree in Electrical Engineering. The project was selected based on its complexity, feasibility, the time span required to complete it, and its relevance to real-time microcomputer design. In addition, the design constraints and specifications were dictated solely by HAGAN ENGINEERING Inc., and all further modifications were to be discussed with and approved by HAGAN. These constraints created a professional, industry-like atmosphere, which is one of the goals of the Clinical Research Program. A brief User's Manual will accompany the MC68000 board; it will contain all the vital information that a programmer or technician might need to understand the system. The manual will contain the complete circuit schematic, a parts list, general design features, and all the software properties of the system (memory map, interrupt tables, register map).
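As a rough illustration of what such a manual's memory map and interrupt table might look like in code, the sketch below defines a hypothetical MC68000-style board layout in C++. All addresses, sizes, peripheral names, and vector assignments are assumptions made up for illustration; only the general 68000 facts (the exception-vector table in the first 1 KiB, autovectors 25-31 for interrupt levels 1-7) come from the processor architecture itself, not from this report.

    // Hypothetical MC68000-style board layout sketched in C++. All addresses,
    // sizes, peripheral names, and vector assignments are illustrative
    // assumptions; only the general 68000 facts (vector 0 = initial SSP,
    // vector 1 = initial PC, vectors 25-31 = autovectors for interrupt
    // levels 1-7) come from the processor architecture itself.
    #include <cstdint>
    #include <cstdio>

    namespace board_map {
        // Assumed address ranges.
        constexpr std::uint32_t ROM_BASE = 0x000000;  // exception vectors + firmware
        constexpr std::uint32_t ROM_SIZE = 0x010000;  // 64 KiB EPROM
        constexpr std::uint32_t RAM_BASE = 0x020000;  // working RAM
        constexpr std::uint32_t RAM_SIZE = 0x010000;  // 64 KiB SRAM
        constexpr std::uint32_t IO_BASE  = 0x0F0000;  // memory-mapped peripherals

        // Assumed peripheral register addresses.
        constexpr std::uint32_t SORTER_STATUS  = IO_BASE + 0x00;  // lane-sensor status
        constexpr std::uint32_t SORTER_CONTROL = IO_BASE + 0x02;  // gate/actuator control
        constexpr std::uint32_t TIMER_COUNT    = IO_BASE + 0x10;  // periodic timer

        // The 68000 exception-vector table occupies the first 1 KiB of memory;
        // a manual's interrupt table maps each peripheral to one such vector.
        constexpr int VEC_LEVEL1_AUTOVECTOR = 25;  // e.g. timer tick (assumed)
        constexpr int VEC_LEVEL2_AUTOVECTOR = 26;  // e.g. sorter sensor event (assumed)
    }

    int main() {
        std::printf("RAM: 0x%06X..0x%06X\n",
                    static_cast<unsigned>(board_map::RAM_BASE),
                    static_cast<unsigned>(board_map::RAM_BASE + board_map::RAM_SIZE - 1));
        std::printf("sorter control register at 0x%06X\n",
                    static_cast<unsigned>(board_map::SORTER_CONTROL));
    }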
32

Desenvolvimento de bancada didatico-experimental de baixo custo para aplicações em controle ativo de vibrações / Design of a low-cost didactic and experimental testbed for active vibration control applications

Amorim, Mauricio Jose 21 February 2006 (has links)
Advisor: Euripedes Guilherme de Oliveira Nobrega / Dissertation (master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecanica / Abstract: This work presents a low-cost didactic testbed intended for teaching control system design, digital signal processing, and real-time programming in engineering courses. Didactic testbeds are indispensable teaching tools, since concepts seen only in the classroom are often abstract. The testbed was developed from an existing mechanical design: strain-gauge sensors were mounted on the structure to measure its response as deformation, and a signal-conditioning circuit was designed for that measurement. In the first phase, covering system identification, control of the structure, and analysis of the results, two drives were designed for the motors that apply the disturbance and the control effort to the plant. Once the first phase was complete, the control system was transferred to an embedded configuration based on a microcontroller. This change of processing environment required some adaptations and new designs: several circuits had to be modified to match the new acquisition board, and a DC motor driver for a pulse-width-modulated control output was built from discrete components. Real-time programming of the control system is also covered in detail. / Master's / Solid Mechanics and Mechanical Design / Master in Mechanical Engineering
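The embedded phase described above (strain-gauge input, digital controller, PWM motor output) follows the classic sampled control-loop pattern. The sketch below is a minimal, made-up C++ illustration of that pattern; the driver stubs, gains, scaling, and the simple PD law are all assumptions for illustration, not code from the dissertation.

    #include <cstdint>
    #include <cstdio>

    // Stand-ins for hardware drivers: on a real board these would touch the ADC,
    // the PWM peripheral, and a periodic timer. Here they are stubs so the sketch compiles.
    static std::uint16_t adc_read_strain_gauge() { return 2048; }   // mid-scale reading
    static void pwm_set_duty(float duty) { std::printf("duty=%.3f\n", duty); }
    static void wait_for_timer_tick() { /* would block until the next control period */ }

    int main() {
        constexpr float kp = 0.8f;        // proportional gain (assumed)
        constexpr float kd = 0.05f;       // derivative gain (assumed)
        constexpr float reference = 0.0f; // regulate deformation toward zero
        float previous_error = 0.0f;

        for (int tick = 0; tick < 10; ++tick) {   // real firmware would loop forever
            wait_for_timer_tick();

            // Convert the raw 12-bit ADC reading to a signed deformation value.
            float deformation = static_cast<float>(adc_read_strain_gauge()) - 2048.0f;

            // Simple PD law as a stand-in for the controller designed in the work.
            float error = reference - deformation;
            float control = kp * error + kd * (error - previous_error);
            previous_error = error;

            // Map the control effort to a PWM duty cycle around 50% (assumed convention).
            float duty = 0.5f + control / 4096.0f;
            duty = duty < 0.0f ? 0.0f : (duty > 1.0f ? 1.0f : duty);
            pwm_set_duty(duty);
        }
    }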
33

Energy-aware real-time scheduling in embedded multiprocessor systems / Ordonnancement temps réel dans les systèmes embarqués multiprocesseurs contraints par l'énergie

Nélis, Vincent 18 October 2010 (has links)
Nowadays, computer systems are everywhere. From simple portable devices such as watches and MP3 players to large stationary installations that control nuclear power plants, computer systems are now present in all aspects of our modern, everyday life. In only about 70 years they have completely transformed our way of life, and they have reached such a degree of sophistication that they will soon be capable of driving our cars and cleaning our houses without any human intervention. As computer systems gain in responsibility, it becomes essential that they provide both safety and reliability. Indeed, a failure in a system such as the anti-lock braking system (ABS) of a car could threaten human lives and have catastrophic, irreversible consequences. Hence, for many years, researchers have addressed the problems of system safety and reliability that come along with this rapid evolution.
This thesis provides a general overview of embedded real-time computer systems, a particular kind of computer system whose number grows daily. We give the reader some preliminary knowledge and a good understanding of the concepts that underlie this technology, focusing especially on the theoretical problems related to real-time constraints, and we briefly summarize the main solutions together with their advantages and drawbacks. This takes the reader through all the conceptual layers of a computer system, from the software level (the logical part, which specifies the system's behavior and requirements) to the hardware level (the physical part, which actually performs the computations and reacts to the environment). Along the way, we introduce the theoretical models that allow researchers to carry out analyses ensuring that all the system requirements are fulfilled. Finally, we address the energy-consumption problem in embedded systems: we describe the various sources of power dissipation in modern technologies and introduce different solutions to reduce this consumption.
This thesis focuses on a specific type of computer system known as embedded real-time systems. A system is said to be embedded when it is developed to serve a well-defined purpose. A mobile phone is a perfect example of an embedded system, since all of its functions are rigorously defined before it is even designed; a personal computer, by contrast, is generally not considered an embedded system, because its designers do not know in advance what it will be used for. A large share of embedded systems are subject to very strong timing constraints, which further distinguishes them from general-purpose computers. For example, when a driver brakes abruptly, the on-board computer triggers the ABS application, and it is essential that this application be handled within a short deadline; in other words, the ABS function must be given priority over the vehicle's other functions. Such embedded systems are called real-time systems because of these notions of time and of priorities among applications. The problem posed by real-time systems is the following: how can we determine, at every instant, an execution order for the different functions such that all of them complete fully within their deadlines? With the recent arrival of multiprocessor systems, this problem has become considerably harder, since the system must now decide which function executes at which moment on which processor so that all timing constraints are met. Finally, these multiprocessor embedded real-time systems quickly ran into an energy-consumption problem: their demand for performance (and therefore for energy) has grown much faster than the capacity of the batteries that power them, a problem currently faced by many systems, such as mobile phones. The objective of this thesis is to go through the various components of such embedded systems and to propose solutions to reduce their energy consumption. / Doctorate in Sciences / info:eu-repo/semantics/nonPublished
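The scheduling question posed above, deciding which function runs at which moment on which processor so that every deadline is met, is the core of multiprocessor real-time scheduling. The toy C++ sketch below illustrates one classic deadline-driven policy, global Earliest Deadline First, on a made-up task set; it is a generic illustration, not an algorithm taken from the thesis.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // Toy global-EDF dispatcher: at each time step, the m ready jobs with the
    // earliest absolute deadlines run on the m processors. Generic illustration
    // of deadline-driven multiprocessor scheduling; the task set is made up.
    struct Job { const char* name; int remaining; int deadline; };

    int main() {
        const int processors = 2;
        std::vector<Job> jobs = {
            {"ABS",       2,  4},   // short, urgent job
            {"telemetry", 3, 10},
            {"logging",   4, 12},
        };

        for (int t = 0; !jobs.empty(); ++t) {
            // Earliest deadline first: order ready jobs by absolute deadline.
            std::sort(jobs.begin(), jobs.end(),
                      [](const Job& a, const Job& b) { return a.deadline < b.deadline; });

            int running = std::min<int>(processors, static_cast<int>(jobs.size()));
            for (int i = 0; i < running; ++i) {
                std::printf("t=%d: CPU%d runs %s\n", t, i, jobs[i].name);
                --jobs[i].remaining;
            }

            // Retire finished jobs and flag any job that passed its deadline.
            for (auto it = jobs.begin(); it != jobs.end();) {
                if (it->remaining == 0) { it = jobs.erase(it); }
                else if (t + 1 >= it->deadline) {
                    std::printf("deadline miss: %s\n", it->name);
                    it = jobs.erase(it);
                }
                else { ++it; }
            }
        }
    }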
34

Precise Analysis of Private And Shared Caches for Tight WCET Estimates

Nagar, Kartik January 2016 (has links) (PDF)
Worst Case Execution Time (WCET) is an important metric for programs running on real-time systems, and finding precise estimates of a program's WCET is crucial to avoid over-allocation and wastage of hardware resources and to improve the schedulability of task sets. Hardware caches have a major impact on a program's execution time, and accurate estimation of a program's cache behavior generally leads to a significant reduction of its estimated WCET. However, the cache behavior of an access cannot be determined in isolation, since it depends on the access history, and in multi-path programs the sequence of accesses made to the cache is not fixed. Hence, the same access can exhibit different cache behavior in different execution instances. This issue is further exacerbated in shared caches in a multi-core architecture, where interfering accesses from co-running programs on other cores can arrive at any time and modify the cache state. Further, cache analysis aimed at WCET estimation should be provably safe, in that the estimated WCET should never be lower than the actual execution time across all execution instances. Faced with such contradicting requirements, previous approaches to cache analysis try to find memory accesses in a program which are guaranteed to hit the cache, irrespective of the program input or, in the case of a shared cache, the interferences from other co-running programs. To do so, they find the worst-case cache behavior for every individual memory access, analyzing the program (and the interferences to a shared cache) to determine whether there are execution instances where an access can suffer a cache miss. However, this approach falls short of making the more precise predictions of private cache behavior which can be safely used for WCET estimation, and it is significantly imprecise for shared cache analysis, where it is often impossible to guarantee that an access always hits the cache. In this work, we take a fundamentally different approach to cache analysis, by (1) trying to find the worst-case behavior of groups of cache accesses, and (2) trying to find the exact cache behavior in the worst-case program execution instance, which is the execution instance with the maximum execution time. For shared caches, we propose the Worst Case Interference Placement (WCIP) technique, which finds the worst-case timing of interfering accesses that would cause the maximum number of cache misses on the worst-case execution path of the program. We first use Integer Linear Programming (ILP) to find an exact solution to the WCIP problem. However, this approach does not scale well for large programs, so we investigate the WCIP problem in detail and prove that it is NP-hard. In the process, we discover that the source of hardness of the WCIP problem lies in finding the worst-case execution path which would exhibit the maximum execution time in the presence of interferences. We use this observation to propose an approximate algorithm for performing WCIP, which bypasses the hard problem of finding the worst-case execution path by simply assuming that all cache accesses made by the program occur on a single path. This allows us to use a simple greedy algorithm to distribute the interfering accesses, choosing those cache accesses which could be most affected by interferences. The greedy algorithm also guarantees that the increase in WCET due to interferences is linear in the number of interferences.
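As a rough illustration of the greedy placement idea described above, the C++ sketch below assumes all cache accesses lie on a single path and that each candidate placement "slot" carries a precomputed bound on the extra misses one interfering access would cause there; the greedy rule then always spends the next interference where it causes the most misses. The slot data and the bookkeeping are illustrative assumptions, not the thesis' WCIP formulation.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // Toy greedy distribution of interfering accesses, in the spirit of the
    // approximate WCIP idea described above: all accesses are assumed to lie on
    // a single path, and each slot carries an assumed bound on how many extra
    // misses one interfering access placed there can cause. Illustrative only.
    struct Slot {
        const char* label;      // which group of cache accesses the interference hits
        int extra_misses_each;  // assumed misses caused per interference placed here
        int capacity;           // assumed max interferences this slot can absorb
    };

    int main() {
        std::vector<Slot> slots = {
            {"loop body A", 3, 2},
            {"loop body B", 2, 4},
            {"straight-line C", 1, 8},
        };
        int interferences = 5;   // budget of interfering accesses from other cores
        int extra_misses = 0;

        // Greedy: always spend the next interference where it hurts the most.
        while (interferences > 0) {
            auto best = std::max_element(slots.begin(), slots.end(),
                [](const Slot& a, const Slot& b) {
                    int ga = a.capacity > 0 ? a.extra_misses_each : 0;
                    int gb = b.capacity > 0 ? b.extra_misses_each : 0;
                    return ga < gb;
                });
            if (best->capacity == 0) break;      // nothing left to affect
            --best->capacity;
            --interferences;
            extra_misses += best->extra_misses_each;
            std::printf("placed one interference on %s (+%d misses)\n",
                        best->label, best->extra_misses_each);
        }
        std::printf("estimated extra misses due to interference: %d\n", extra_misses);
    }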
Experimentally, we show that WCIP provides a substantial precision improvement in the final WCET over previous approaches to shared cache analysis, and the approximate algorithm almost matches the precision of the ILP-based approach while being considerably faster. For private caches, we identify multiple scenarios where the hit-miss predictions made by traditional Abstract Interpretation-based approaches are not sufficient to fully capture cache behavior for WCET estimation. We introduce the concept of cache miss paths, which are abstractions of the program paths along which an access can suffer a cache miss. We propose an ILP-based approach which uses cache miss paths to find the exact cache behavior in the worst-case execution instance of the program. However, the ILP-based approach needs information about the worst-case execution path to predict the cache behavior, and hence it is difficult to integrate with other micro-architectural analyses. We then show that most of the precision improvement of the ILP-based approach can be recovered without any knowledge of the worst-case execution path, by a careful analysis of the cache miss paths themselves. In particular, we can use cache miss paths to find the worst-case behavior of groups of cache accesses. Further, we can find upper bounds on the maximum number of times that cache accesses inside loops can exhibit worst-case behavior. This results in a scalable, precise method for performing private cache analysis which can be easily integrated with other micro-architectural analyses.
35

Parallel acceleration of deadlock detection and avoidance algorithms on GPUs

Abell, Stephen W. 08 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Current mainstream computing systems have become increasingly complex. Most of them have Central Processing Units (CPUs) that invoke multiple threads for their computing tasks. The growing issue with these systems is resource contention, and with resource contention comes the risk of deadlock. Various software and hardware approaches exist that implement deadlock detection/avoidance techniques; however, they lack either the speed or the problem-size capability needed for real-time systems. The research conducted for this thesis aims to resolve issues present in past approaches by converging the two platforms (software and hardware) by means of the Graphics Processing Unit (GPU). Presented in this thesis are two GPU-based deadlock detection algorithms and one GPU-based deadlock avoidance algorithm: (i) GPU-OSDDA, a GPU-based single-unit resource deadlock detection algorithm; (ii) GPU-LMDDA, a GPU-based multi-unit resource deadlock detection algorithm; and (iii) GPU-PBA, a GPU-based deadlock avoidance algorithm. Both GPU-OSDDA and GPU-LMDDA utilize the Resource Allocation Graph (RAG) to represent resource allocation status in the system, with the RAG represented using integer-length bit-vectors. The advantages of this approach are numerous: (i) less memory is required for the algorithm matrices, (ii) 32 computations are performed per instruction (in most cases), and (iii) the algorithms can handle large numbers of processes and resources. The deadlock detection algorithms also require minimal interaction with the CPU, since matrix storage and algorithm computations are implemented on the GPU, providing an interactive-service type of behavior. As a result of this approach, both algorithms achieve speedups of up to more than two orders of magnitude over their serial CPU implementations (3.17-317.42x for GPU-OSDDA and 37.17-812.50x for GPU-LMDDA). Lastly, GPU-PBA is the first parallel deadlock avoidance algorithm implemented on the GPU. While it does not achieve a two-orders-of-magnitude speedup over its CPU implementation, it does provide a platform for future deadlock avoidance research on the GPU.
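To make the bit-vector representation concrete (32 graph entries packed per word, so one bitwise instruction effectively processes 32 entries), the serial C++ sketch below stores a wait-for graph as bit-packed rows and checks for a cycle through a given process by repeated bitwise-OR propagation; a cycle in a single-unit-resource wait-for graph indicates deadlock. This is a generic illustration of the representation, not the GPU-OSDDA or GPU-LMDDA algorithms; on the GPU, each word of a row would typically be handled by its own thread.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Bit-packed wait-for graph: rows[i] holds the out-edges of node i, 32 edges
    // per uint32_t word, so a single bitwise OR processes 32 entries at once.
    struct BitGraph {
        int n;                                   // number of nodes (processes)
        int words;                               // words per row = ceil(n / 32)
        std::vector<std::uint32_t> rows;         // n * words words, row-major

        explicit BitGraph(int nodes)
            : n(nodes), words((nodes + 31) / 32),
              rows(static_cast<std::size_t>(nodes) * ((nodes + 31) / 32), 0) {}

        void add_edge(int from, int to) { rows[from * words + to / 32] |= 1u << (to % 32); }

        bool in_set(const std::vector<std::uint32_t>& set, int v) const {
            return (set[v / 32] >> (v % 32)) & 1u;
        }

        // True if `start` can reach itself, i.e. the wait-for graph has a cycle
        // through `start` (a deadlock indicator for single-unit resources).
        bool on_cycle(int start) const {
            std::vector<std::uint32_t> reach(words, 0);
            for (int pass = 0; pass < n; ++pass) {        // at most n propagation rounds
                std::vector<std::uint32_t> next = reach;
                for (int v = 0; v < n; ++v) {
                    if (v == start || in_set(reach, v)) {
                        for (int w = 0; w < words; ++w)   // 32 edges per OR
                            next[w] |= rows[v * words + w];
                    }
                }
                if (next == reach) break;                 // fixed point reached
                reach.swap(next);
            }
            return in_set(reach, start);
        }
    };

    int main() {
        BitGraph g(4);
        g.add_edge(0, 1);   // P0 waits for P1
        g.add_edge(1, 2);   // P1 waits for P2
        g.add_edge(2, 0);   // P2 waits for P0  -> cycle 0 -> 1 -> 2 -> 0
        std::printf("deadlock cycle through P0: %s\n", g.on_cycle(0) ? "yes" : "no");
    }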
36

Real-time adaptive-optics optical coherence tomography (AOOCT) image reconstruction on a GPU

Shafer, Brandon Andrew January 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Adaptive-optics optical coherence tomography (AOOCT) is a technology that has been advancing rapidly in recent years and offers remarkable capabilities for scanning the human eye in vivo. To bring its ultra-high-resolution capabilities to clinical use, however, newer technology is needed in the image reconstruction process. General-purpose computation on graphics processing units is one way that this computationally intensive reconstruction can be performed on a desktop computer in real time. This work presents the process of AOOCT image reconstruction, the basics of writing parallel code with NVIDIA's CUDA, and a new AOOCT image reconstruction implementation built with CUDA. The results of this work demonstrate that image reconstruction can be done in real time with high accuracy using a GPU.
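Fourier-domain OCT systems commonly reconstruct each depth profile (A-scan) by subtracting the background spectrum and Fourier-transforming the spectral fringe, and it is this per-A-scan work that a GPU pipeline parallelizes, typically with batched FFTs. The C++ sketch below is a generic serial illustration of that style of reconstruction using a naive DFT; it is not the AOOCT pipeline or CUDA code from this thesis, and the synthetic data are made up.

    #include <cmath>
    #include <complex>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Generic Fourier-domain-OCT-style A-scan reconstruction: subtract the
    // background spectrum, Fourier-transform the spectral fringe, keep the
    // magnitude as the depth profile. The naive DFT keeps the sketch
    // self-contained; a real-time pipeline would use batched GPU FFTs instead.
    static const double kPi = std::acos(-1.0);

    std::vector<double> reconstruct_ascan(const std::vector<double>& spectrum,
                                          const std::vector<double>& background) {
        const std::size_t n = spectrum.size();
        std::vector<std::complex<double>> fringe(n);
        for (std::size_t i = 0; i < n; ++i)
            fringe[i] = spectrum[i] - background[i];       // remove the DC/background term

        std::vector<double> depth_profile(n / 2);
        for (std::size_t k = 0; k < n / 2; ++k) {          // one output bin per depth
            std::complex<double> acc(0.0, 0.0);
            for (std::size_t i = 0; i < n; ++i) {
                double phase = -2.0 * kPi * static_cast<double>(k) * static_cast<double>(i)
                               / static_cast<double>(n);
                acc += fringe[i] * std::complex<double>(std::cos(phase), std::sin(phase));
            }
            depth_profile[k] = std::abs(acc);              // reflectivity vs. depth
        }
        return depth_profile;
    }

    int main() {
        const std::size_t n = 256;
        std::vector<double> background(n, 1.0), spectrum(n);
        for (std::size_t i = 0; i < n; ++i)                // synthetic fringe: one reflector
            spectrum[i] = 1.0 + 0.5 * std::cos(2.0 * kPi * 20.0 * static_cast<double>(i) / n);

        std::vector<double> a = reconstruct_ascan(spectrum, background);
        std::size_t peak = 0;
        for (std::size_t k = 1; k < a.size(); ++k)
            if (a[k] > a[peak]) peak = k;
        std::printf("peak depth bin: %zu\n", peak);        // expect bin 20 for this fringe
    }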
