161

Gravure dynamique : visualisation par modèle physique pour l'animation et les réalités virtuelles / Dynamic Engraving : Physically-based visualization for animation and virtual realities

Sillam, Kevin 14 December 2011 (has links)
Mass-interaction physical modeling is a powerful formalism for simulating a wide range of dynamic behaviors and for producing expressive, rich and complex motions. An inherent difficulty of this formalism for animation production, however, is that the point masses have no spatial extent, so it is difficult to produce animated image sequences by directly rendering the point masses that describe the motion. Methods are therefore needed that extend the spatiality of these point masses in order to complete the particle-based physical-model animation pipeline. The ICA laboratory addressed this problem with a method that extends the spatiality of the point masses by simulating the physical interaction between the masses and a dynamic medium, following the metaphor of engraving; it has produced convincing animated images of various visual phenomena. This document presents an extension of the method, notably to the 3D case and to new behaviors. The algorithm was also parallelized, yielding real-time simulations on current graphics cards (GPUs). To exploit the possibilities of the method, we developed software with an interactive graphical interface that makes it easy to model different behaviors. The method has been integrated into multi-sensory interactive art installations, providing rich, configurable dynamic behavior while allowing real-time interaction with the spectator.
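
As a rough illustration of the mass-interaction idea described above — point masses coupled by viscoelastic links, extended with an "engraving" pass in which the masses deposit a trace into a grid-based medium — a minimal sketch might look like the following. The constants, the two-mass topology and the deposit rule are illustrative assumptions, not the ICA laboratory's actual model.

    #include <vector>
    #include <cmath>
    #include <cstdio>

    // Two point masses joined by a viscoelastic link engrave a 2D grid (the medium).
    // All constants and the deposit rule are illustrative assumptions.
    struct Mass { double x, y, vx, vy; };

    int main() {
        const double k = 40.0, z = 0.5, rest = 0.2;   // stiffness, damping, rest length
        const double dt = 1e-3, m = 1.0;
        const int W = 64, H = 64;
        std::vector<double> grid(W * H, 0.0);         // the engraved medium
        std::vector<Mass> masses = {{0.30, 0.50, 0, 0}, {0.60, 0.50, 0, 0}};

        for (int step = 0; step < 2000; ++step) {
            // viscoelastic interaction force along the link between the two masses
            double dx = masses[1].x - masses[0].x, dy = masses[1].y - masses[0].y;
            double d = std::sqrt(dx * dx + dy * dy) + 1e-12;
            double dvx = masses[1].vx - masses[0].vx, dvy = masses[1].vy - masses[0].vy;
            double f = k * (d - rest) + z * (dvx * dx + dvy * dy) / d;
            double fx = f * dx / d, fy = f * dy / d;

            masses[0].vx += dt * fx / m;  masses[0].vy += dt * fy / m;
            masses[1].vx -= dt * fx / m;  masses[1].vy -= dt * fy / m;
            for (Mass& p : masses) { p.x += dt * p.vx; p.y += dt * p.vy; }

            // engraving pass: each mass leaves a trace in the cell it occupies
            for (const Mass& p : masses) {
                int cx = (int)(p.x * W), cy = (int)(p.y * H);
                if (cx >= 0 && cx < W && cy >= 0 && cy < H)
                    grid[cy * W + cx] += 0.01;        // deposit rate (assumed)
            }
        }
        std::printf("engraved value at centre: %f\n", grid[(H / 2) * W + W / 2]);
    }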
162

Measuring Soft Error Sensitivity of FPGA Soft Processor Designs Using Fault Injection

Harward, Nathan Arthur 01 March 2016 (has links)
Increasingly, soft processors are being considered for use within FPGA-based reliable computing systems. In an environment in which radiation is a concern, such as space, the logic and routing (configuration memory) of soft processors are sensitive to radiation effects, including single event upsets (SEUs). Thus, effective tools are needed to evaluate and estimate how sensitive the configuration memories of soft processors are in high-radiation environments. A high-speed FPGA fault injection system and methodology were created using the Xilinx Radiation Test Consortium's (XRTC's) Virtex-5 radiation test hardware to conduct exhaustive tests of the SEU sensitivity of a design within an FPGA's configuration memory. This tool was used to show that the sensitivity of the configuration memory of a soft processor depends on several variables, including its microarchitecture, its customizations and features, and the software instructions that are executed. The fault injection experiments described in this thesis were performed on five different soft processors, i.e., MicroBlaze, LEON3, Arm Cortex-M0 DesignStart, OpenRISC 1200, and PicoBlaze. Emphasis was placed on characterizing the sensitivity of the MicroBlaze soft processor and the dependence of the sensitivity on various modifications. Seven benchmarks were executed through the various experiments and used to determine the SEU sensitivity of the soft processor's configuration memory to the instructions that were executed. In this thesis, a wide variety of soft processor fault injection results are presented to show the differences in sensitivity between multiple soft processors and the software they run.
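
The injection methodology itself — flip one configuration bit, run a benchmark, compare the output against a fault-free golden run, then repair the bit before moving on — can be sketched as a simple campaign loop. The FaultInjector interface below is a hypothetical placeholder for the XRTC hardware access layer, and the frame/bit counts in the example are arbitrary.

    #include <cstdint>
    #include <cstdio>
    #include <utility>
    #include <vector>

    // Hypothetical interface to the injection hardware; method names are placeholders.
    struct FaultInjector {
        virtual void flipConfigBit(uint32_t frame, uint32_t bit) = 0;    // inject SEU
        virtual void repairConfigBit(uint32_t frame, uint32_t bit) = 0;  // scrub/restore
        virtual bool runBenchmarkAndCompare() = 0;  // true if output matches golden run
        virtual ~FaultInjector() = default;
    };

    // Exhaustive single-bit injection campaign over the configuration memory.
    std::vector<std::pair<uint32_t, uint32_t>>
    runCampaign(FaultInjector& inj, uint32_t numFrames, uint32_t bitsPerFrame) {
        std::vector<std::pair<uint32_t, uint32_t>> sensitiveBits;
        for (uint32_t frame = 0; frame < numFrames; ++frame) {
            for (uint32_t bit = 0; bit < bitsPerFrame; ++bit) {
                inj.flipConfigBit(frame, bit);
                if (!inj.runBenchmarkAndCompare())
                    sensitiveBits.push_back({frame, bit});   // SEU-sensitive bit
                inj.repairConfigBit(frame, bit);
            }
        }
        return sensitiveBits;
    }

    // Toy stand-in that marks every 1000th injection as sensitive, so the loop
    // can be exercised without hardware.
    struct MockInjector : FaultInjector {
        uint64_t injected = 0;
        void flipConfigBit(uint32_t, uint32_t) override { ++injected; }
        void repairConfigBit(uint32_t, uint32_t) override {}
        bool runBenchmarkAndCompare() override { return injected % 1000 != 0; }
    };

    int main() {
        MockInjector mock;
        auto sensitive = runCampaign(mock, 16, 1312);   // frame/bit counts are arbitrary
        std::printf("sensitive bits found: %zu\n", sensitive.size());
    }

The configuration-memory sensitivity of a design is then reported as the fraction of injected bits that produced an output mismatch.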
163

Self-tuning dynamic voltage scaling techniques for processor design

Park, Junyoung 30 January 2014 (has links)
The Dynamic Voltage Scaling (DVS) technique has proven ideal for balancing the performance and energy consumption of a processor, since it allows an almost cubic reduction in dynamic power consumption for only a nearly linear reduction in performance. Thanks to this property, the DVS technique has been used for two main purposes: energy saving and temperature reduction. Recently, however, DVS processors have lost some of their appeal as process technology advances, because of increasing Process, Voltage and Temperature (PVT) variations. To make a processor tolerant to the growing uncertainties caused by such variations, designers have added more timing margin, so in a modern DVS processor reducing the voltage costs comparatively more performance than in its predecessors. For this reason the technique has considerable room for improvement, for the following reasons: (a) from an energy-saving viewpoint, the excessive margins that account for worst-case operating conditions can be exploited, because the worst case is rarely encountered at run time; (b) from a temperature-reduction viewpoint, accurate prediction of the optimal performance point in a DVS processor can increase its performance. In this dissertation we propose four performance-improvement ideas spanning the two uses of the DVS technique. For DVS aimed at energy saving, we introduce three different margin-reduction (or margin-decision) techniques. First, we introduce a new indirect Critical Path Monitor (CPM) that makes a conventional DVS processor adaptive to its environment. Our CPM is composed of several Slope Generators, each of which reproduces voltage-scaling slopes similar to those of the potential critical paths under a given process corner. Each CPR in a Slope Generator tracks the delays of potential critical paths with minimum difference at any condition within a certain voltage range; the CPRs in the same Slope Generator are connected to a multiplexer, and one of them is selected according to the current voltage level. Calibration is performed using a conventional speed-binning process with clock duty-cycle modulation. Second, we propose a new direct CPM based on a non-speculative pre-sampling technique. A processor based on this technique predicts timing errors in the actual critical paths and takes preventive steps to avoid them whenever the timing margin falls below a critical level. Unlike a direct CPM that relies on circuit-level speculation, the main flip-flop (FF) of our direct CPM never fails even if the shadow latch captures a timing error, guaranteeing always-correct operation of the processor. Our non-speculative CPM is better suited to high-performance processor designs than the speculative CPM because it does not require modification of the original design and has a lower power overhead. Third, we introduce a novel method, based on the conventional binning process, that determines the most accurate margin. By reusing the hold-scan FFs in a processor, we reduce design complexity, minimize hardware overhead and increase error-detection accuracy. Running workloads on the processor with Stop-Go clock gating lets us find which paths have timing errors during the speed-binning steps at various fixed temperature levels. From this timing-error information we can determine different maximum frequencies for diverse operating conditions; the method achieves a high degree of accuracy without a large overhead. For DVS aimed at temperature reduction, we introduce a run-time temperature-monitoring scheme that predicts the optimal performance point of a DVS processor with high accuracy. To increase prediction accuracy, the technique monitors the thermal stress of the processor at run time and uses several Look-Up Tables (LUTs) for different process corners. Monitoring is performed while Stop-Go clock gating is applied, and the average EN value is calculated at the end of the monitoring period. The optimal performance point is then predicted from the average EN value and the LUT corresponding to the process corner under which the processor was manufactured. Simulation results show that we can achieve maximum processor performance while keeping the processor temperature below the threshold temperature.
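
As a conceptual sketch of the run-time prediction step described last — choosing a voltage/frequency operating point from a per-process-corner look-up table and a monitored average activity value — consider the following. Every table entry, the corner set and the band mapping are invented for illustration and are not calibrated data from the dissertation.

    #include <cstdio>

    // Pick a DVS operating point from a per-corner LUT indexed by an activity band.
    struct OperatingPoint { double voltage_V; double freq_MHz; };

    enum Corner { SLOW = 0, TYPICAL = 1, FAST = 2 };
    const int kBands = 4;

    // One row per process corner; higher activity bands map to lower-power points
    // so that the predicted temperature stays under the threshold.
    const OperatingPoint kLut[3][kBands] = {
        {{1.00, 800},  {0.95, 700}, {0.90, 600}, {0.85, 500}},   // SLOW corner
        {{1.05, 900},  {1.00, 800}, {0.95, 700}, {0.90, 600}},   // TYPICAL corner
        {{1.10, 1000}, {1.05, 900}, {1.00, 800}, {0.95, 700}},   // FAST corner
    };

    OperatingPoint predictOptimalPoint(Corner corner, double avgActivity) {
        int band = static_cast<int>(avgActivity * kBands);       // quantize 0..1 activity
        if (band < 0) band = 0;
        if (band >= kBands) band = kBands - 1;
        return kLut[corner][band];
    }

    int main() {
        OperatingPoint p = predictOptimalPoint(TYPICAL, 0.42);
        std::printf("predicted operating point: %.2f V, %.0f MHz\n", p.voltage_V, p.freq_MHz);
    }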
164

Šiuolaikinių procesorių architektūrų tyrimas, našumo lyginamoji analizė / Investigation on architectures of processors and comparative analysis of their efficiency

Kislauskas, Nerijus 21 May 2005 (has links)
The work deals with the main aspects of increasing computer efficiency. The object of investigation is a system consisting of a processor, memory, other components and the buses connecting them. Quite different systems are used in the modern information world, so it is interesting to compare the architectures of several manufacturers. Comparison is possible because the systems share the same architectural features: processor clock speed, cache, memory, dual-channel technology and others. For the comparative analysis, software was used that reveals the efficiency gains of separate computer components. The systems chosen for study are relatively new from a technological point of view: Intel Pentium 4, AMD Athlon XP and AMD Sempron. The experiments showed that system efficiency depends for the most part on processor capacity: increasing its clock speed yields 9–13%, the L1 cache has an effect of up to 1350% (theoretically), and the L2 cache 30–38%. Hyper-Threading was observed to matter mostly in floating-point operations (up to 68%), and branch prediction would theoretically increase efficiency by up to 47%. When estimating the efficiency of the whole system, the results show that the main role belongs to the processor; the influence of other components is less noticeable. The working peculiarities of the memory type determine the rate of data selection and transmission from memory. The study has shown that... [to full text]
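
A small illustration of the kind of component-level measurement such a study relies on is shown below: the average time per memory access is compared for a cache-friendly stride and a cache-hostile one. The buffer size and strides are arbitrary, and this is not the benchmark software used in the thesis.

    #include <chrono>
    #include <cstdio>
    #include <vector>

    // Average time per element access for a given stride over a large buffer.
    static double nsPerAccess(const std::vector<int>& buf, size_t stride) {
        volatile long long sink = 0;
        size_t touches = 0;
        auto t0 = std::chrono::steady_clock::now();
        for (int rep = 0; rep < 8; ++rep)
            for (size_t i = 0; i < buf.size(); i += stride) { sink += buf[i]; ++touches; }
        auto t1 = std::chrono::steady_clock::now();
        (void)sink;
        return std::chrono::duration<double, std::nano>(t1 - t0).count() / touches;
    }

    int main() {
        // 64 MiB of ints: large enough that strided accesses mostly miss the caches.
        std::vector<int> buf(64 * 1024 * 1024 / sizeof(int), 1);
        std::printf("sequential: %.2f ns/access\n", nsPerAccess(buf, 1));     // cache-friendly
        std::printf("strided:    %.2f ns/access\n", nsPerAccess(buf, 1024));  // cache-hostile
    }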
165

Lastbalanseringskluster : En studie om operativsystemets påverkan på lastbalanseraren / Load Balancing Clusters: A Study of the Operating System's Impact on the Load Balancer

Liv, Jakob, Nygren, Fredrik January 2014 (has links)
This report contains a study of an operating system's impact on the load balancer HAproxy. The study was performed in an experimental environment with four virtual test clients, one load balancer and three web server nodes connected to the load balancer. The operating system was the main focus of the study, where the load on the load balancer's hardware, the response time, the number of connections and the maximum number of connections per second were examined. The operating systems tested were Ubuntu 10.04, CentOS 6.5, FreeBSD 9.1 and OpenBSD 5.5. The results show that the hardware load and the response time are nearly identical on all operating systems, with the exception of OpenBSD, where the conditions needed to run the hardware tests could not be achieved. FreeBSD, together with CentOS, handled the highest number of connections; Ubuntu turned out to be more limited and OpenBSD very limited. FreeBSD also handled the highest number of connections per second, followed by Ubuntu, CentOS and finally OpenBSD, which turned out to be the worst performer.
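
As a purely conceptual illustration of what the load balancer does with incoming requests (round-robin distribution over the web server nodes), a minimal sketch follows; it is not HAproxy code, and the node addresses are made up.

    #include <cstdio>
    #include <string>
    #include <vector>

    // Rotate incoming requests over a fixed set of backend nodes.
    class RoundRobinBalancer {
    public:
        explicit RoundRobinBalancer(std::vector<std::string> nodes)
            : nodes_(std::move(nodes)) {}

        const std::string& pick() {
            const std::string& chosen = nodes_[next_];
            next_ = (next_ + 1) % nodes_.size();   // next request goes to the next node
            return chosen;
        }
    private:
        std::vector<std::string> nodes_;
        size_t next_ = 0;
    };

    int main() {
        RoundRobinBalancer lb({"10.0.0.11:80", "10.0.0.12:80", "10.0.0.13:80"});  // made-up nodes
        for (int request = 0; request < 6; ++request)
            std::printf("request %d -> %s\n", request, lb.pick().c_str());
    }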
166

Development of an integrated co-processor based power electronic drive / by Robert D. Hudson

Hudson, Robert Dearn January 2008 (has links)
The McTronX research group at the North-West University is currently researching self-sensing techniques for Active Magnetic Bearings (AMBs). The research is part of an ongoing effort to expand the knowledge base on AMBs in the School of Electrical, Electronic and Computer Engineering in order to support industries that make use of the technology. The aim of this project is to develop an integrated co-processor-based power electronic drive, with the emphasis placed on the ability of the co-processor to execute AMB self-sensing algorithms. The two primary techniques for implementing self-sensing in AMBs are state estimation and modulation; this research focuses on hardware development to facilitate the implementation of the modulation method. Self-sensing algorithms require concurrent processing power and speed that are well suited to an architecture combining a digital signal processor (DSP) and a field programmable gate array (FPGA). A comprehensive review of power amplifier topologies shows that the pulse width modulation (PWM) switching amplifier is best suited for controlling the voltage and current required to drive the AMB coils. Combining DSPs and power electronics into an integrated co-processor-based power electronic drive requires detailed attention to aspects of PCB design, including signal integrity and grounding. A conceptual design is conducted as part of the process of compiling a subsystem development specification for the integrated drive, in conjunction with the McTronX Research Group. Component selection criteria, trade-off studies and various circuit simulations serve as the basis for this essential phase of the project. The conceptual design and development specification determine the architecture, functionality and interfaces of the integrated drive. Conceptual designs for the power amplifier, digital controller, electronic supply and mechanical layout of the integrated drive are provided. A detailed design is then performed for the power amplifier, digital controller and electronic supply; issues such as component selection, power supply requirements, thermal design, interfacing of the various circuit elements and PCB design are covered in detail. The output of the detailed design is a complete set of circuit diagrams for the integrated controller. The integrated drive is interfaced with existing AMB hardware and facilitates the successful implementation of two self-sensing techniques. The hardware performance of the integrated co-processor-based power electronic drive is evaluated by means of measurements taken from this experimental self-sensing setup. The co-processor performance is evaluated in terms of resource usage and execution time and performs satisfactorily in this regard. The integrated co-processor-based power electronic drive provides sufficient resources, processing speed and flexibility to accommodate a variety of self-sensing algorithms, thus contributing to the research currently underway in the field of AMBs by the McTronX research group at the North-West University. / Thesis (M.Ing. (Electrical Engineering))--North-West University, Potchefstroom Campus, 2009.
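
As a minimal sketch of the control path such a drive implements — a digital current controller turning the error between a commanded and a measured AMB coil current into a PWM duty cycle for the switching amplifier — consider the following. The gains, supply voltage and sample time are illustrative assumptions, not the thesis design.

    #include <algorithm>
    #include <cstdio>

    // PI current controller producing a PWM duty cycle for the switching amplifier.
    struct PiCurrentController {
        double kp, ki, dt, vSupply;   // gains, sample time and bus voltage are assumed values
        double integral = 0.0;

        // Returns a duty cycle in [0, 1] for the PWM switching amplifier.
        double update(double iRef_A, double iMeas_A) {
            double error = iRef_A - iMeas_A;
            integral += error * dt;
            double vCmd = kp * error + ki * integral;      // requested coil voltage
            double duty = 0.5 + vCmd / (2.0 * vSupply);    // map +/- vSupply around 50% duty
            return std::min(1.0, std::max(0.0, duty));     // clamp to the valid range
        }
    };

    int main() {
        PiCurrentController ctrl{200.0, 5000.0, 50e-6, 50.0};   // 20 kHz loop, 50 V bus (assumed)
        double duty = ctrl.update(2.0 /*A reference*/, 1.7 /*A measured*/);
        std::printf("PWM duty cycle: %.3f\n", duty);
    }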
168

Sensores e interfaces com aplicações em motor mancal / Sensors and interfaces with applications in bearingless motors

Sousa Filho, João Coelho de 19 December 2011 (has links)
Significant research has been devoted to electric machines without mechanical bearings, generically called bearingless machines or, more specifically, bearingless (mancal) motors. This work gives an introductory presentation of the bearingless motor and its peripheral devices, focusing on the design and implementation of the sensors and interfaces needed to control the rotor's radial position and the rotation of the machine. The signals from the machine are conditioned to the analog inputs of a TMS320F2812 DSP and used in the control program. The purpose of this work is to design and build a system of sensors and interfaces compatible with the inputs and outputs of the TMS320F2812 DSP for controlling a bearingless motor, with emphasis on modularity, circuit simplicity, a reduced number of power supplies, good noise immunity and good frequency response above 10 kHz. The system is tested on an ordinary 3.7 kVA induction motor modified to operate as a bearingless machine with a divided winding.
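
A minimal sketch of the kind of conversion the acquisition interface performs before the control program uses a signal is shown below: a raw ADC sample of a radial position sensor is scaled into a displacement. The resolution, reference voltage, sensor gain and offset are assumed values for illustration, not the calibration of the actual hardware.

    #include <cstdint>
    #include <cstdio>

    constexpr int    kAdcBits      = 12;     // assumed ADC resolution
    constexpr double kVref_V       = 3.0;    // assumed ADC reference voltage
    constexpr double kSensorGain   = 5.0;    // assumed sensor gain, V per mm
    constexpr double kSensorOffset = 1.5;    // assumed sensor output with rotor centred, V

    // Convert a raw ADC count into a radial displacement in millimetres.
    double adcToDisplacementMm(uint16_t raw) {
        double volts = raw * kVref_V / ((1 << kAdcBits) - 1);
        return (volts - kSensorOffset) / kSensorGain;   // mm from the centred position
    }

    int main() {
        uint16_t sample = 2500;                         // example raw reading
        std::printf("radial displacement: %+.3f mm\n", adcToDisplacementMm(sample));
    }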
169

[en] HIGH RESOLUTION GRAPHIC SYSTEM / [pt] PROCESSADOR GRÁFICO PARA SISTEMAS DE ALTA RESOLUÇÃO

MARCELO ROBERTO BAPTISTA PEREIRA LUIS JIMENEZ 18 June 2007 (has links)
[en] In the present work, we analyse the architecture of raster-scan graphics boards. We then discuss the use of VRAM dynamic memories to deal with the video-memory bottleneck. We also analyse a few important modules that may be considered optional, since the choice of using them depends on the specific application of the graphics board. Finally, we describe the design and implementation of a graphics board, based on the TMS34010 graphics processor, with image acquisition capabilities.
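
As a minimal illustration of raster-scan framebuffer addressing — the display is a linear array of pixels scanned row by row, so the pixel at (x, y) lives at offset y * pitch + x — a sketch follows. The resolution and pixel depth are arbitrary and unrelated to the TMS34010 board described above.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    constexpr int kWidth  = 1024;   // arbitrary resolution for the sketch
    constexpr int kHeight = 768;

    // Write one 8-bit pixel into a row-major (raster-scan) framebuffer.
    void putPixel(std::vector<uint8_t>& fb, int x, int y, uint8_t value) {
        if (x < 0 || x >= kWidth || y < 0 || y >= kHeight) return;  // clip to the screen
        fb[static_cast<size_t>(y) * kWidth + x] = value;            // offset = y * pitch + x
    }

    int main() {
        std::vector<uint8_t> framebuffer(static_cast<size_t>(kWidth) * kHeight, 0);
        for (int x = 0; x < kWidth; ++x)
            putPixel(framebuffer, x, kHeight / 2, 255);             // draw a horizontal line
        std::printf("pixel at (512, 384) = %u\n",
                    static_cast<unsigned>(framebuffer[384u * kWidth + 512]));
    }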
170

Paměťový subsystém v SystemC / SystemC Memory Subsystem

Michl, Kamil January 2020 (has links)
This thesis deals with the design and implementation of a processor simulation memory subsystem. The memory subsystem is designed using the Transaction Level Modeling approach. The implementation is done in C++ language utilizing the SystemC library. The processor simulation is adopted from the Codasip company simulator. The objective is to create a functional connection between the processor and the memory inside the simulator. This connection supports communication protocols of AHB3-lite, AXI4-lite, CPB, and CPB-lite buses. The new implementation of the aforementioned connection and the memory is integrated into the original simulator. The resulting simulator is tested using unit tests.
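
As a rough, generic illustration of the Transaction Level Modeling style used here — not the Codasip simulator's code and not any of the listed bus protocols — a minimal SystemC TLM-2.0 memory target might look like the following sketch.

    #include <systemc>
    #include <tlm.h>
    #include <tlm_utils/simple_target_socket.h>
    #include <cstring>
    #include <vector>

    // Minimal TLM-2.0 target modelling a byte-addressable memory; size and
    // latency are arbitrary values for the sketch.
    struct SimpleMemory : sc_core::sc_module {
        tlm_utils::simple_target_socket<SimpleMemory> socket;
        std::vector<unsigned char> storage;

        SC_CTOR(SimpleMemory) : socket("socket"), storage(0x10000, 0) {
            socket.register_b_transport(this, &SimpleMemory::b_transport);
        }

        // Blocking transport: serve reads and writes against the backing storage.
        void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
            const sc_dt::uint64 addr = trans.get_address();
            unsigned char* ptr = trans.get_data_ptr();
            const unsigned int len = trans.get_data_length();

            if (addr + len > storage.size()) {
                trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
                return;
            }
            if (trans.is_read())
                std::memcpy(ptr, &storage[addr], len);
            else if (trans.is_write())
                std::memcpy(&storage[addr], ptr, len);

            delay += sc_core::sc_time(10, sc_core::SC_NS);  // nominal access latency
            trans.set_response_status(tlm::TLM_OK_RESPONSE);
        }
    };

    int sc_main(int, char*[]) {
        SimpleMemory mem("mem");
        sc_core::sc_start();   // a real testbench would drive mem.socket from an initiator
        return 0;
    }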
