
Cache Prediction and Execution Time Analysis on Real-Time MPSoC

Neikter, Carl-Fredrik January 2008
Real-time systems require not only that the logical operations are correct; equally important is that the specified time constraints are always met. This has been successfully studied before for mono-processor systems. However, as the hardware in these systems gets more complex, the previous approaches become invalid. For example, multi-processor systems-on-chip (MPSoC) are becoming more and more common, and together with a shared memory, the bus access time is unpredictable in nature. This has recently been resolved, but a safe and not overly pessimistic cache analysis approach for MPSoC has not been investigated before. This thesis has resulted in designed and implemented algorithms for cache analysis on real-time MPSoCs with a shared communication infrastructure. An additional advantage is that the algorithms include improvements over previous approaches for mono-processor systems. These algorithms have been verified with the help of data flow analysis theory. Furthermore, it is not known how different types of cache miss characteristics of a task influence the worst-case execution time on MPSoC. Therefore, a program that generates randomized tasks according to different parameters has been constructed. The parameters can, for example, influence the complexity of the control flow graph and the average distance between cache misses.
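The data-flow-based cache analysis mentioned above rests on joining abstract cache states at control-flow merge points. Below is a minimal illustrative sketch, not the thesis's actual algorithm: the function name, state representation, and 4-way associativity are our assumptions. In a classical "must" analysis, a block survives a join only if it is guaranteed cached on every incoming path, at the worse (larger) of its LRU age bounds.

```python
ASSOCIATIVITY = 4  # illustrative LRU set size, not from the thesis

def must_join(state_a, state_b):
    """Join two abstract cache states (dict: block -> LRU age bound)."""
    joined = {}
    for block in sorted(state_a.keys() & state_b.keys()):
        age = max(state_a[block], state_b[block])  # worst case of both paths
        if age < ASSOCIATIVITY:                    # still guaranteed cached
            joined[block] = age
    return joined

# At a join of two CFG paths, only "x" and "y" are provably cached:
print(must_join({"x": 0, "y": 2, "z": 3}, {"x": 1, "y": 3}))  # {'x': 1, 'y': 3}
```

Blocks that survive the join can then be classified as guaranteed hits, which is what makes the resulting worst-case execution time bound safe.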

Développement d'un simulateur pour le X-ray integral field unit : du signal astrophysique à la performance instrumentale / Development of an End-to-End simulator for the X-ray Integral Field Unit : from the astrophysical signal to the instrument performance

Peille, Philippe 28 September 2016
Cette thèse est consacrée au développement d'un modèle End-to-End pour le spectrocalorimètre X-IFU qui observera à partir de 2028 l'Univers en rayons X avec une précision jamais atteinte auparavant. Ce travail s'est essentiellement organisé en deux parties. J'ai dans un premier temps étudié la dynamique des parties les plus internes des binaires X de faible masse à l'aide de deux sondes particulières que sont les sursauts X et les oscillations quasi-périodiques au kHz (kHz QPOs). En me basant sur les données d'archive du satellite Rossi X-ray Timing Explorer et sur des méthodes d'analyse spécifiquement développées dans ce but, j'ai notamment pu mettre en évidence pour la première fois une réaction du premier sur le second, confirmant le lien très étroit entre ces oscillations et les parties les plus internes du système. Le temps de rétablissement du système suite aux sursauts entre également en conflit dans la plupart des cas avec l'augmentation supposée du taux d'accrétion suite à ces explosions. Au travers d'une analyse spectro-temporelle complète des deux kHz QPOs de 4U 1728-34, j'ai également pu confirmer l'incompatibilité des spectres de retard des deux QPOs qui suggère une origine différente de ces deux oscillations. L'étude de leurs spectres de covariance, obtenus pour la première fois dans cette thèse, a quant à elle mis en évidence le rôle central de la couche de Comptonisation et potentiellement celui d'une zone particulièrement compacte de la couche limite pour l'émission des QPOs. Dans le second volet de ma thèse, j'ai développé un simulateur End-to-End pour l'instrument X-IFU permettant de représenter l'ensemble du processus menant à une observation scientifique en rayons X, de l'émission des photons par une source jusqu'à leur mesure finale à bord du satellite. 
J'ai notamment mis en place des outils permettant la comparaison précise de plusieurs matrices de détecteurs en prenant en compte les effets de la reconstruction du signal brut issu des électroniques de lecture. Cette étude a mis en évidence l'intérêt de configurations hybrides, contenant une sous-matrice de petits pixels capables d'améliorer par un ordre de grandeur la capacité de comptage de l'instrument. Une solution alternative consisterait à défocaliser le miroir lors de l'observation de sources ponctuelles brillantes. Situées au coeur de la performance du X-IFU, j'ai également comparé de manière exhaustive différentes méthodes de reconstruction des signaux bruts issus des détecteurs X-IFU. Ceci a permis de montrer qu'à faible coût en termes de puissance de calcul embarquée, une amélioration significative de la résolution en énergie finale de l'instrument pouvait être obtenue à l'aide d'algorithmes plus sophistiqués. En tenant compte des contraintes de calibration, le candidat le plus prometteur apparaît aujourd'hui être l'analyse dans l'espace de résistance. En me servant de la caractérisation des performances des différents types de pixels, j'ai également mis en place une méthode de simulation rapide et modulable de l'ensemble de l'instrument permettant d'obtenir des observations synthétiques à long temps d'exposition de sources X très complexes, représentatives des futures capacités du X-IFU. Cet outil m'a notamment permis d'étudier la sensibilité de cet instrument aux effets de temps mort et de confusion, mais également d'estimer sa future capacité à distinguer différents régimes de turbulence dans les amas de galaxies et de mesurer leur profil d'abondance et de température. A plus long terme ce simulateur pourra servir à l'étude d'autres cas scientifiques, ainsi qu'à l'analyse d'effets à l'échelle de l'ensemble du plan de détection tels que la diaphonie entre pixels. 
/ This thesis is dedicated to the development of an End-to-End model for the X-IFU spectrocalorimeter scheduled for launch in 2028 on board the Athena mission, which will observe the X-ray universe with unprecedented precision. This work has been organized in two main parts. I first studied the dynamics of the innermost parts of low mass X-ray binaries using two specific probes of the accretion flow: type I X-ray bursts and kHz quasi-periodic oscillations (kHz QPOs). Starting from the archival data of the Rossi X-ray Timing Explorer mission and using data analysis techniques developed specifically for this purpose, I notably highlighted for the first time a reaction of the latter to the former, confirming the tight link between these oscillations and the inner parts of the system. The measured recovery time was also found to be in conflict with recent claims of an enhancement of the accretion rate following these thermonuclear explosions. From the exhaustive spectral timing analysis of both kHz QPOs in 4U 1728-34, I further confirmed the inconsistency of their lag energy spectra, pointing towards a different origin for these two oscillations. The study of their covariance spectra, obtained here for the first time, revealed the key role of the Comptonization layer, and potentially of a more compact part of it, in the emission of the QPOs. In the second part of my thesis, I focused on the development of an End-to-End simulator for the X-IFU capable of depicting the full process leading to an X-ray observation, from the photon emission by the astrophysical source to the on-board detection. I notably implemented tools allowing the precise comparison of different potential pixel array configurations, taking into account the effects of the event reconstruction from the raw data coming from the readout electronics. This study highlighted the advantage of using hybrid arrays containing a small-pixel sub-array capable of improving by an order of magnitude the count rate capability of the instrument.
An alternative solution would consist in defocusing the mirror during the observation of bright point sources. As the reconstruction of the pixel raw signal is a key component of the overall X-IFU performance, I also thoroughly compared different reconstruction methods for it. This showed that, with a minimal impact on the required on-board processing power, a significant improvement of the final energy resolution could be obtained from more sophisticated reconstruction methods. Taking into account the calibration constraints, the most promising candidate currently appears to be the so-called "resistance space analysis". Taking advantage of the obtained performance characterization of the different foreseen pixel types, I also developed a fast and modular simulation method of the complete instrument, providing representative synthetic observations with long exposure times of complex astrophysical sources, representative of the future capabilities of the X-IFU. This tool notably allowed me to study the sensitivity of the instrument to dead time and source confusion effects, and to estimate its future ability to distinguish different turbulence regimes in galaxy clusters and to measure abundance and temperature profiles. In the longer run, this simulator will be useful for the study of other scientific cases as well as the analysis of instrumental effects at the full detection plane level, such as pixel crosstalk.
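For context on the raw-signal reconstruction methods being compared, the usual baseline in microcalorimetry is optimal (matched) filtering. The sketch below is purely illustrative and is not the X-IFU pipeline: it shows only the white-noise special case, where the least-squares pulse amplitude reduces to a normalized dot product of the record with a known pulse template, scaled by a calibration energy.

```python
def optimal_filter_energy(record, template, calib_energy):
    """Least-squares pulse amplitude times a calibration energy
    (white-noise matched filter; illustrative only)."""
    num = sum(d * s for d, s in zip(record, template))
    den = sum(s * s for s in template)
    return calib_energy * num / den

template = [0.0, 1.0, 0.5, 0.25]   # known pulse shape (assumed)
record = [0.0, 2.0, 1.0, 0.5]      # measured pulse, twice the template
print(optimal_filter_energy(record, template, 6.0))  # 12.0
```

More sophisticated methods such as the resistance space analysis named in the abstract address the fact that real detector pulses are nonlinear in energy, which this linear baseline ignores.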

Architectures and Protocols for Performance Improvements of Real-Time Networks

Kunert, Kristina January 2010 (has links)
When designing architectures and protocols for data traffic requiring real-time services, one of the major design goals is to guarantee that traffic deadlines can be met. However, many real-time applications also have additional requirements such as high throughput, high reliability, or energy efficiency. High-performance embedded systems communicating heterogeneous traffic with high bandwidth and strict timing requirements are in need of more efficient communication solutions, while wireless industrial applications, communicating control data, require support for reliability and guarantees of real-time predictability at the same time. To meet the requirements of high-performance embedded systems, this thesis work proposes two multi-wavelength high-speed passive optical networks. To enable reliable wireless industrial communications, a framework incorporating carefully scheduled retransmissions is developed. All solutions are based on a single-hop star topology, predictable Medium Access Control algorithms and Earliest Deadline First scheduling, centrally controlled by a master node. Further, real-time schedulability analysis is used as an admission control policy to provide delay guarantees for hard real-time traffic. For high-performance embedded systems an optical star network with an Arrayed Waveguide Grating placed in the centre is suggested. The design combines spatial wavelength reuse with fixed-tuned and tuneable transceivers in the end nodes, enabling simultaneous transmission of both control and data traffic. This, in turn, permits efficient support of heterogeneous traffic with both hard and soft real-time constraints. By analyzing traffic dependencies in this multichannel network, and adapting the real-time schedulability analysis to incorporate these traffic dependencies, a considerable increase of the possible guaranteed throughput for hard real-time traffic can be obtained.
Most industrial applications require using existing standards such as IEEE 802.11 or IEEE 802.15.4 for interoperability and cost efficiency. However, these standards do not provide predictable channel access, and thus real-time guarantees cannot be given. A framework is therefore developed, combining transport layer retransmissions with real-time analysis admission control, which has been adapted to consider retransmissions. It can be placed on top of many underlying communication technologies, exemplified in our work by the two aforementioned wireless standards. To enable a higher data rate than pure IEEE 802.15.4, while still maintaining its energy saving properties, two multichannel network architectures based on IEEE 802.15.4 and encompassing the framework are designed. The proposed architectures are evaluated in terms of reliability, utilization, delay, complexity, scalability and energy efficiency, and it is concluded that performance is enhanced through redundancy in the time and frequency domains.
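The flavour of admission control described above can be illustrated with the classical EDF utilization test. The sketch below is a simplification and not the dissertation's adapted analysis (the parameter names and the uniform retry budget are our assumptions): each message's transmission time is inflated to cover a fixed number of retransmission attempts, and the set is admitted on a single channel only if total utilization stays at or below 1.

```python
def edf_admissible(messages, retries=2):
    """messages: list of (C, T) pairs, transmission time and period
    (= deadline). True if schedulable under EDF on one channel when
    every message is budgeted for `retries` extra attempts."""
    utilization = sum((1 + retries) * c / t for c, t in messages)
    return utilization <= 1.0

msgs = [(1.0, 10.0), (2.0, 20.0)]
print(edf_admissible(msgs, retries=2))  # utilization 0.6 -> True
print(edf_admissible(msgs, retries=5))  # utilization 1.2 -> False
```

Budgeting retransmissions inside the schedulability test is what lets the framework trade capacity for reliability while keeping hard delay guarantees.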

Random Local Delay Variability : On-chip Measurement And Modeling

Das, Bishnu Prasad 06 1900
This thesis focuses on the measurement and modeling of random local delay variability. It presents a circuit technique to measure individual logic gate delays in silicon in order to study within-die variation, and it proposes a Process, Voltage and Temperature (PVT)-aware gate delay model for voltage- and temperature-scalable linear Statistical Static Timing Analysis (SSTA). Technology scaling allows packing billions of transistors inside a single chip. However, it is difficult to fabricate very small transistors with deterministic characteristics, which leads to variations. Transistor-level random local variations are growing rapidly with each technology generation, and they need to be quantified in silicon. We propose an all-digital circuit technique to measure the on-chip delay of an individual logic gate (both inverting and non-inverting) in its unmodified form, based on a reconfigurable ring oscillator structure. A test chip was fabricated in a 65nm technology node to show the feasibility of the technique. Delay measurements of different nominally identical inverters in close physical proximity show variations of up to 28%, indicating the large impact of local variations. This large random delay variation in silicon motivates the inclusion of random local process parameters in the delay model. In addition, today's low-power designs with multiple supply domains have non-uniform supply profiles, and non-uniform switching activity across the chip leads to temperature variation. Accurate timing prediction therefore requires a PVT-aware delay model. We use neural networks, which are well known for their ability to approximate any arbitrary continuous function, and we show how the model can be used to derive the sensitivities required for voltage- and temperature-scalable linear SSTA at an arbitrary voltage and temperature point.
Applying this voltage- and temperature-scalable linear SSTA to the ISCAS 85 benchmarks shows promising results: with respect to SPICE, the average error in mean delay is less than 1.08%, the average error in standard deviation is less than 2.65%, and the errors in predicting the 99% and 1% probability points are 1.31% and 1%, respectively.
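The measurement principle behind a reconfigurable ring oscillator can be sketched with simple arithmetic (illustrative only; the fabricated 65nm circuit and its calibration are far more involved, and the function names are ours). Because the oscillating edge traverses the ring twice per period, comparing the period with the gate under test in the loop against the period with it bypassed isolates that one gate's delay. A spread statistic like the one below is one way to express a variation figure such as the 28% quoted above.

```python
def gate_delay(period_with_gate, period_without_gate):
    """One-way delay of the inserted gate, from the two measured RO
    periods: the gate is traversed twice per oscillation period."""
    return (period_with_gate - period_without_gate) / 2.0

def local_variation(delays):
    """Spread of nominally identical gate delays, as a fraction of the mean."""
    mean = sum(delays) / len(delays)
    return (max(delays) - min(delays)) / mean

# e.g. periods (ns) measured with the gate inserted and bypassed:
print(gate_delay(10.5, 10.0))  # 0.25
```

Repeating the measurement over many nominally identical gates in close proximity yields the per-gate delay samples from which within-die variation statistics are drawn.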

Modeling and Analysis of Large-Scale On-Chip Interconnects

Feng, Zhuo December 2009
As IC technologies scale to the nanometer regime, efficient and accurate modeling and analysis of VLSI systems with billions of transistors and interconnects becomes increasingly critical and difficult. VLSI systems impacted by increasingly high-dimensional process-voltage-temperature (PVT) variations demand much more modeling and analysis effort than ever before, while the analysis of large-scale on-chip interconnects, which requires solving tens of millions of unknowns, imposes great challenges in computer-aided design. This dissertation presents new methodologies for addressing these two important challenges in large-scale on-chip interconnect modeling and analysis. In the past, standard statistical circuit modeling techniques usually employed principal component analysis (PCA) and its variants to reduce parameter dimensionality. Although widely adopted, these techniques can be very limited, since parameter dimension reduction is achieved by merely considering the statistical distributions of the controlling parameters while neglecting the important correspondence between these parameters and the circuit performances (responses) under modeling. This dissertation presents a variety of performance-oriented parameter dimension reduction methods that can lead to more than one order of magnitude of parameter reduction for a variety of VLSI circuit modeling and analysis problems. The sheer size of present-day power/ground distribution networks makes their analysis and verification tasks extremely runtime- and memory-inefficient and, at the same time, limits the extent to which these networks can be optimized. Given that today's commodity graphics processing units (GPUs) can deliver more than 500 GFlops (Flops: floating point operations per second) of computing power and 100 GB/s of memory bandwidth, more than 10X greater than offered by modern general-purpose quad-core microprocessors, it is very desirable to convert this impressive GPU computing power into usable design automation tools for VLSI verification. In this dissertation, for the first time, we show how to exploit recent massively parallel single-instruction multiple-thread (SIMT) graphics processing unit (GPU) platforms to tackle power grid analysis with very promising performance. Our GPU-based network analyzer is capable of solving tens of millions of power grid nodes in just a few seconds. Additionally, with the above GPU-based simulation framework, the more challenging three-dimensional full-chip thermal analysis can be solved much more efficiently than ever before.
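Why power grid analysis maps so well onto SIMT GPUs can be sketched with a node-parallel relaxation. The code below is a pure-Python stand-in, not the dissertation's solver: a Jacobi iteration for the linear system G*v = i of a resistive grid updates every node independently from its neighbours' previous values, which is exactly the one-thread-per-node pattern a GPU executes in lockstep.

```python
def jacobi_grid(n, g, loads, vdd, iters=2000):
    """n x n grid with conductance g between neighbouring nodes and the
    boundary pinned at vdd; loads[(r, c)] is the current drawn at an
    interior node. Returns the converged node voltages."""
    v = [[vdd] * n for _ in range(n)]
    for _ in range(iters):
        nxt = [row[:] for row in v]
        for r in range(1, n - 1):
            for c in range(1, n - 1):
                nbr_sum = v[r - 1][c] + v[r + 1][c] + v[r][c - 1] + v[r][c + 1]
                # KCL at the node: 4*g*v = g*sum(neighbours) - I_load
                nxt[r][c] = (g * nbr_sum - loads.get((r, c), 0.0)) / (4 * g)
        v = nxt  # all updates use only old values: trivially parallel
    return v

grid = jacobi_grid(5, 1.0, {(2, 2): 0.4}, 1.0)
print(round(grid[2][2], 3))  # 0.85 -- IR drop at the loaded centre node
```

A real solver would use a faster-converging method, but the per-node independence shown here is what the GPU's thousands of threads exploit.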
