  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Performance evaluation and enhancement for AF two-way relaying in the presence of channel estimation error

Wang, Chenyuan 30 April 2012 (has links)
Cooperative relaying is a promising diversity-achieving technique that provides reliable transmission, high throughput, and extensive coverage for wireless networks in a variety of applications. Two-way relaying is a spectrally efficient protocol that offers one solution to the half-duplex loss of one-way relay channels, and incorporating multiple-input multiple-output (MIMO) technology can further improve spectral efficiency and diversity gain. Much related work has been performed on the two-way relay network (TWRN), but most of it assumes perfect channel state information (CSI). In a realistic scenario, however, the channel must be estimated and estimation error is unavoidable. In this thesis we therefore explicitly take the CSI error into account and investigate its impact on the performance of amplify-and-forward (AF) TWRNs in which either multiple distributed single-antenna relays or a single multiple-antenna relay station is exploited. For the distributed relay network, we consider imperfect self-interference cancellation at both sources, which exchange information with the help of multiple relays; maximal ratio combining (MRC) is then applied to improve the decision statistics under imperfect signal detection. The resulting performance degradation in terms of outage probability and average bit-error rate (BER) is analyzed, together with its asymptotic behavior. To further improve spectral efficiency while maintaining spatial diversity, we employ maximum-minimum (Max-Min) relay selection (RS) and examine the impact of imperfect CSI on this single-RS scheme. To mitigate the negative effect of imperfect CSI, we resort to adaptive power allocation (PA), minimizing either the outage probability or the average BER, which can be cast as a geometric programming (GP) problem. Numerical results verify the correctness of our analysis and show that the adaptive PA scheme outperforms the equal PA scheme under the aggregate effect of imperfect CSI.
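The Max-Min relay selection mentioned above picks the relay whose weaker source link is strongest. A minimal sketch, assuming exponentially distributed link SNRs (the numbers and variable names are illustrative, not the thesis's system model):

```python
import numpy as np

# Illustrative Max-Min relay selection: relay k sees SNRs g1[k] and
# g2[k] on its two source links; the selected relay maximises the
# weaker of the two, which preserves the diversity order.
rng = np.random.default_rng(42)
n_relays = 5
g1 = rng.exponential(scale=10.0, size=n_relays)  # source 1 -> relay SNRs
g2 = rng.exponential(scale=10.0, size=n_relays)  # source 2 -> relay SNRs

bottleneck = np.minimum(g1, g2)    # per-relay end-to-end bottleneck SNR
best = int(np.argmax(bottleneck))  # Max-Min selection
```

With imperfect CSI, `g1` and `g2` would be replaced by their estimates, which is exactly where the selection scheme loses performance.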
When a single MIMO relay is employed, we address the problem of robust MIMO relay design under the constraint that only imperfect CSI is available. We design the MIMO relay based on the CSI estimates, incorporating the estimation errors to attain a robust design under the worst-case philosophy. The optimization problem corresponding to the robust MIMO relay design is shown to be nonconvex, which motivates the use of semidefinite relaxation (SDR) coupled with a randomization technique to obtain computationally efficient, high-quality approximate solutions. Numerical simulations compare the proposed MIMO relay with an existing nonrobust method, validating its robustness against channel uncertainty. / Graduate
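The SDR-plus-randomization recipe can be sketched on a toy problem. This is not the thesis's relay design: the assumed problem is a small Boolean quadratic program, and `X` stands in for a relaxed PSD solution rather than one returned by an actual SDP solver.

```python
import numpy as np

# Toy Gaussian randomization after semidefinite relaxation (SDR):
# approximately maximise x^T A x over x_i in {-1, +1}. Draw candidates
# xi ~ N(0, X) from the relaxed solution X, project each onto the
# feasible set, and keep the best one.
rng = np.random.default_rng(0)
A = np.array([[2.0, 0.8, 0.1],
              [0.8, 1.5, 0.4],
              [0.1, 0.4, 1.0]])
X = np.eye(3)  # assumed relaxed solution (X PSD, diag(X) = 1)

L = np.linalg.cholesky(X + 1e-12 * np.eye(3))
best_val, best_x = -np.inf, None
for _ in range(200):
    xi = L @ rng.standard_normal(3)
    x = np.sign(xi)          # project onto {-1, +1}^3
    x[x == 0] = 1.0
    val = x @ A @ x
    if val > best_val:
        best_val, best_x = val, x
```

In the robust relay design the feasible set and objective are different, but the pattern — solve the relaxation, then randomize and project — is the same.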
92

A Method for Optimised Allocation of System Architectures with Real-time Constraints

Ventovaara, Marcus, Hasanbegović, Arman January 2018 (has links)
Optimised allocation of system architectures is a well-researched area, as it can greatly reduce the development cost of systems and increase performance and reliability in their respective applications. In conjunction with the recent shift from federated to integrated architectures in the automotive domain, and the increasing complexity of computer systems in terms of both software and hardware, design space exploration and optimised allocation of system architectures are of great interest. This thesis proposes a method to derive architectures and their allocations for systems with real-time constraints. The method uses integer linear programming to solve for an optimised allocation of system architectures subject to a set of linear constraints, taking resource requirements, communication dependencies, and manual design choices into account. Additionally, this thesis describes and evaluates an industrial use case in which the timing characteristics of a system were evaluated and the method was applied to simultaneously derive a system architecture and an optimised allocation of it. The evidence and validations presented suggest the viability of the method in an industrial setting, and the work sets a precedent for future research and development as well as future applications of the method in both industry and academia.
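The constraint structure of such an allocation can be illustrated with a toy stand-in: the tasks, nodes, capacities, and co-location rule below are invented, and brute-force enumeration replaces the ILP solver purely to keep the sketch self-contained. A real formulation would hand the same constraints to an ILP solver.

```python
from itertools import product

# Toy allocation: assign tasks to nodes so that per-node capacity is
# respected and manually co-located tasks share a node, minimising the
# number of nodes used.
tasks = {"sense": 2, "fuse": 3, "act": 1}   # task -> CPU demand
nodes = {"ecu1": 6, "ecu2": 4}              # node -> CPU capacity
must_colocate = [("sense", "fuse")]         # manual design choice

def feasible(assign):
    load = {n: 0 for n in nodes}
    for t, n in assign.items():
        load[n] += tasks[t]
    if any(load[n] > nodes[n] for n in nodes):
        return False
    return all(assign[a] == assign[b] for a, b in must_colocate)

best = None
for combo in product(nodes, repeat=len(tasks)):
    assign = dict(zip(tasks, combo))
    if feasible(assign):
        used = len(set(combo))
        if best is None or used < best[0]:
            best = (used, assign)
```

In ILP form, each `assign[t] == n` choice becomes a binary variable and the capacity and co-location rules become linear constraints over those variables.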
93

A mathematical framework for designing and evaluating control strategies for water- & food-borne pathogens : a norovirus case study

McMenemy, Paul January 2017 (has links)
Norovirus (NoV) is a significant cause of gastroenteritis globally, and the consumption of oysters is frequently linked to outbreaks. Depuration is the principal means employed to reduce levels of potentially harmful agents or toxins in shellfish. The aim of this thesis is to construct mathematical models that describe the depuration dynamics of water-borne pathogens, and specifically to examine the dynamics of NoV during depuration for a population of shellfish. Legislation is currently under consideration within the EU by the Directorate-General for Health and Consumers (DG SANCO) to limit the maximum level of NoV that consumers are exposed to via this route; it is therefore important that any models constructed incorporate control measures that could be used to enforce such limits. Doing so allowed the calculation of the minimum depuration times required to comply with the control measures incorporated into the models. In addition to modelling the impact of depuration on pathogens, we wished to gain insight into how the variability, and not just the mean level, of water-borne pathogens can be equally important in determining the length of depuration required to minimise food safety risks to the consumer. This proved difficult in the absence of data sets from which variability measures could be calculated, as little data is currently available to inform these values for NoV. Nevertheless, our modelling techniques were able to calculate an upper limit on the variability of water-borne pathogens whose levels are well approximated by lognormal distributions. Finally, we construct a model linking the depuration process to the accretion of pathogens by shellfish while still in farming waters. This model proposes that pulses of untreated waste water, released by sewage treatment works during periods of high rainfall, would be transmitted into shellfish whilst filter-feeding.
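The link between a regulatory limit and a minimum depuration time can be sketched with the simplest possible dynamics. This assumes first-order exponential decay of the pathogen load, which is an illustrative simplification, not the thesis's model; all numbers are invented.

```python
import math

# If the NoV load decays as C(t) = C0 * exp(-r t), the minimum
# depuration time to reach a regulatory limit follows directly.
def min_depuration_time(c0, limit, rate):
    """Hours needed for load c0 to decay below `limit` at `rate`/h."""
    if c0 <= limit:
        return 0.0
    return math.log(c0 / limit) / rate

t = min_depuration_time(c0=1000.0, limit=100.0, rate=0.05)
# t = ln(10) / 0.05, roughly 46 hours
```

Variability enters exactly here: if `c0` is lognormally distributed rather than fixed, the required time must cover the upper tail of the initial load, not just its mean.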
94

Definition and tuning of robust fractional order controllers

Tenoutit, Mammar 01 July 2013 (has links)
The application of fractional calculus in automatic control has received much attention in recent years, mainly in robust control. This PhD dissertation is a contribution to the control of integer-order systems using a fractional-order PID controller. The classical PID, well known for its applications to industrial plants, is adapted to the fractional case as a PInDf controller, thanks to a fractional-order reference model characterized by its robustness to static gain variations. This new controller is then generalized to time-delay systems as a fractional Smith predictor. In their standard form, these controllers are suited to first- and second-order systems, with or without a time delay. For more complex systems, two design methodologies are proposed, based on the method of moments and on an output feedback approach. For systems whose model is obtained by an identification procedure, the closed loop must also be robust to estimation errors; a worst-case model, derived from the covariance matrix of the estimator and the frequency uncertainty domains, is therefore proposed for the design of the controller. Numerical simulations demonstrate that this methodology yields a closed loop that is robust to static gain variations and to identification uncertainties.
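The robustness of a fractional-order reference model to static gain variations rests on a textbook property worth making concrete: a fractional integrator has constant phase at all frequencies, so a gain change moves the crossover frequency without changing the phase margin (Bode's iso-damping idea). The order `n = 1.5` below is an arbitrary illustration.

```python
import cmath
import math

# Phase of the fractional integrator (j*w)**(-n) is -n * 90 degrees at
# every frequency w, so a pure static gain change shifts crossover but
# leaves the phase margin intact.
n = 1.5
phases = [math.degrees(cmath.phase((1j * w) ** (-n)))
          for w in (0.1, 1.0, 10.0)]
# every entry is -135 degrees
```

An integer-order system has frequency-dependent phase, which is why matching it to such a reference model is what confers the gain robustness.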
95

Worst-case delay analysis of core-to-I/O flows over many-core architectures

Abdallah, Laure 05 April 2017 (has links)
Many-core architectures are more promising hardware for designing real-time systems than multi-core systems, as they enable an easier, better-controlled integration of a larger number of applications, potentially of different criticality levels. In embedded real-time systems, these architectures are integrated within backbone Ethernet networks, as they mostly provide Ethernet controllers as Input/Output (I/O) interfaces. Thus, applications of different criticality levels can be allocated on the Network-on-Chip (NoC) and be required to communicate with sensors and actuators. However, the worst-case behavior of the NoC for both inter-core and core-to-I/O communications must be established. Several NoCs targeting hard real-time systems, built on specific hardware extensions, have been designed; none of these extensions, however, are currently available in commercial NoC-based many-core architectures, which instead rely on wormhole switching with round-robin arbitration. Under this switching strategy, interference patterns can occur between direct and indirect flows on the many-core. Moreover, the mapping of both critical and non-critical applications over the NoC affects the network contention that core-to-I/O communications experience. These core-to-I/O flows, entering through the Ethernet interface of the NoC, cross two networks of different speeds: the NoC and Ethernet. On the NoC, the maximum packet size is much smaller than the size of an Ethernet frame, so a frame transmitted over the NoC is divided into many packets, and the frame is removed from the buffer of the Ethernet interface only once all of its data have been received by the DDR-SDRAM memory on the NoC. In addition, congestion on the NoC, due to wormhole switching, can delay these flows, while the buffer in the Ethernet interface has limited capacity. This behavior can therefore lead to dropped Ethernet frames. The idea is thus to analyze the worst-case transmission delays on the NoC and to reduce the delays of the core-to-I/O flows. In this thesis, we show that the pessimism of existing Worst-Case Traversal Time (WCTT) computation methods and of existing mapping strategies leads to Ethernet frames being dropped due to internal congestion in the NoC. We demonstrate properties of such wormhole NoCs that reduce this pessimism when modelling flows in contention, and then propose a mapping strategy that minimizes the contention of core-to-I/O flows in order to solve the problem. We show that WCTT values can be reduced by up to 50% compared with the current state-of-the-art real-time packet schedulability analysis, thanks to modelling the real impact of flows in contention in our computation method. Moreover, experimental results on real avionics applications show significant improvements in the transmission delays of core-to-I/O flows, by up to 94%, without significantly impacting the transmission delays of core-to-core flows. These improvements are due to our mapping strategy, which allocates applications so as to reduce the impact of non-critical flows on critical flows. These reductions in the WCTT of core-to-I/O flows avoid dropped Ethernet frames.
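The frame-drop mechanism can be made concrete with a back-of-the-envelope sketch. Every number below (packet payload, arrival period, per-packet bound, buffer depth) is an assumption for illustration, not a figure from the thesis platform.

```python
import math

# An Ethernet frame is split into NoC packets and is freed from the
# interface buffer only once every packet has been delivered. If NoC
# congestion makes per-frame service slower than frame arrival, the
# backlog grows and the buffer eventually overflows.
frame_bytes = 1500
noc_payload = 64                      # assumed NoC packet payload (bytes)
packets = math.ceil(frame_bytes / noc_payload)   # packets per frame

arrival_period_us = 12.0              # assumed: one frame every 12 us
per_packet_wctt_us = 0.625            # assumed congested per-packet bound
service_us = packets * per_packet_wctt_us        # per-frame service time

buffer_frames = 4                     # assumed interface buffer depth
growth = service_us - arrival_period_us          # backlog growth per frame
frames_until_drop = (math.inf if growth <= 0 else
                     math.ceil(buffer_frames * arrival_period_us / growth))
```

Reducing the per-packet WCTT, which is what the mapping strategy above targets, directly pushes `service_us` back below the arrival period and removes the overflow.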
96

Many-Core Timing Analysis of Real-Time Systems

Rihani, Hamza 01 December 2017 (has links)
Predictability is of paramount importance in real-time and safety-critical systems, where non-functional properties such as timing behavior have a high impact on the system's correctness. As many safety-critical systems have a growing performance demand, classical architectures such as single-cores are no longer sufficient. One increasingly popular solution is the use of multi-core systems, even in the real-time domain. Recent many-core architectures, such as the Kalray MPPA, were designed to take advantage of the performance benefits of a multi-core architecture while offering a degree of predictability. It remains hard, however, to predict execution times, due to interference on shared resources (e.g., bus, memory). To tackle this challenge, Time Division Multiple Access (TDMA) buses are often advocated. In the first part of this thesis, we are interested in the timing analysis of accesses to shared resources in such environments. Our approach uses Satisfiability Modulo Theories (SMT) to encode the semantics and the execution time of the analyzed program. To estimate the delays of shared-resource accesses, we propose an SMT model of a shared TDMA bus. An SMT solver is used to find a solution that corresponds to the execution path with the maximal execution time. Using examples, we show how the worst-case execution time estimate is improved by combining the program semantics and the shared-bus analysis in SMT. In the second part, we introduce a response-time analysis technique for synchronous data-flow programs mapped to multiple parallel dependent tasks running on a compute cluster of the Kalray MPPA-256 many-core processor. The analysis computes a set of response times and release dates that respect the constraints in the task dependency graph. We derive a mathematical model of the multi-level bus arbitration policy used by the MPPA, and refine the analysis to account for (i) release dates and response times of co-runners, (ii) task execution models, (iii) the use of memory banks, and (iv) the pipelining of memory accesses. Further improvements to the precision of the analysis were achieved by considering, in the interference analysis, only those accesses that block the emitting core. Our experimental evaluation focuses on randomly generated benchmarks and an avionics case study.
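The fixed-point flavor of response-time analysis can be illustrated with the classic textbook iteration. This is a simplification for illustration, not the thesis's multi-level MPPA arbitration model; the task parameters are invented.

```python
import math

# Classic response-time iteration: task i's response time R satisfies
# R = C_i + sum over higher-priority tasks j of ceil(R / T_j) * C_j,
# iterated from R = C_i until it stabilises.
def response_time(c_i, higher_prio):
    """higher_prio: list of (C_j, T_j) pairs for interfering tasks."""
    r = c_i
    while True:
        nxt = c_i + sum(math.ceil(r / t) * c for c, t in higher_prio)
        if nxt == r:
            return r
        r = nxt

r = response_time(3, [(1, 4), (2, 10)])
# converges to r = 7
```

The refinements listed above (release dates of co-runners, memory banks, access pipelining) all act by shrinking the interference term inside this kind of fixed-point equation.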
97

Static analysis of programs by abstract interpretation and decision procedures

Henry, Julien 13 October 2014 (has links)
Static program analysis aims at automatically determining whether a program satisfies particular properties. For this purpose, abstract interpretation is a framework that enables the computation of invariants, i.e. properties on the variables that hold for every program execution. The precision of these invariants depends on many parameters, in particular the abstract domain and the iteration strategy used for computing them. In this thesis, we propose several improvements to the abstract interpretation framework that enhance the overall precision of the analysis. Usually, abstract interpretation consists in computing an ascending sequence with widening, which converges towards a fixpoint that is a program invariant, and then computing a descending sequence of correct solutions without widening. We describe and experiment with a method to improve a fixpoint after its computation, by starting a new ascending/descending sequence from a judiciously chosen starting value. Abstract interpretation can also be made more precise by distinguishing paths inside loops, at the expense of possibly exponential complexity. Satisfiability modulo theories (SMT), whose solving techniques have improved considerably in the last decade, allows sparse representations of paths and sets of paths. We propose to combine this SMT representation of paths with various state-of-the-art iteration strategies to further improve the overall precision of the analysis. We propose a second coupling between abstract interpretation and SMT in a program verification framework called Modular Path Focusing, which computes function and loop summaries by abstract interpretation in a modular fashion, guided by error paths obtained with SMT. Our framework can be used for various purposes: it can prove the unreachability of certain error states, but can also synthesize function and loop preconditions under which these error states are unreachable. We then describe an application of static analysis and SMT to the estimation of worst-case execution time (WCET). We first present how to express WCET estimation as an optimization modulo theory problem, and show that natural encodings into SMT yield formulas that are intractable for all current production-grade solvers. We propose an efficient way to considerably reduce the computation time of the SMT solvers by conjoining to the formulas well-chosen summaries of program portions obtained by static analysis. We finally describe the design and implementation of Pagai, a new static analyzer built on the LLVM compiler infrastructure, which computes numerical inductive invariants using the various techniques described in this thesis. Because of the non-monotonicity of the results of abstract interpretation with widening operators, it is difficult to conclude that one abstraction is more precise than another from theoretical local precision results alone; we therefore conducted extensive comparisons between our new techniques and previous ones, on a variety of open-source packages and benchmarks used in the community.
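The ascending/descending sequences can be shown on the smallest possible example. The sketch below analyzes `i = 0; while i < 100: i += 1` in a hand-rolled interval domain; it is an illustration of the standard widening/narrowing mechanism, not the thesis's algorithms or the Pagai implementation.

```python
# Interval domain: values are (lo, hi) pairs.
INF = float("inf")

def widen(a, b):
    """Standard interval widening: unstable bounds jump to infinity."""
    lo = a[0] if b[0] >= a[0] else -INF
    hi = a[1] if b[1] <= a[1] else INF
    return (lo, hi)

def join(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]))

def loop_body(x):
    """Abstract effect of one iteration: filter by i < 100, then i += 1."""
    lo, hi = x
    lo, hi = lo, min(hi, 99)
    return (lo + 1, hi + 1)

init = (0, 0)
x = init
# Ascending sequence with widening: forces convergence in few steps.
while True:
    nxt = widen(x, join(init, loop_body(x)))
    if nxt == x:
        break
    x = nxt
# Here x == (0, inf): sound but coarse.
x = join(init, loop_body(x))   # one descending (narrowing) step
# Now x == (0, 100): the precise invariant at the loop head.
```

The fixpoint-improvement technique described above intervenes exactly at the last step: instead of stopping after one descending sequence, it restarts a new ascending/descending sequence from a better starting value.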
98

Network Calculus Algorithmics

Jouhet, Laurent 07 November 2012 (has links)
Network Calculus is a theory for computing worst-case bounds on the performance of communication networks. The network is modelled as a digraph: servers are located on the nodes, and flows must follow paths in the digraph. Constraints are placed on arrival curves (how much data has passed through a given point since the activation of the network) and on service curves (how much work each server must provide). To derive bounds on worst-case performance, such as the backlog or the end-to-end delay, these envelopes are combined using operators from tropical algebra: min, +, (min, +)-convolution, and so on. This thesis focuses on Network Calculus algorithmics, that is, on how to make this formalism effective. This work first led us to compare the model variations present in the literature, revealing expressiveness equivalences, for instance between Real-Time Calculus and Network Calculus. We then proposed a new (min, +) operator to compute performance bounds in networks with aggregated flows, and studied feed-forward networks under blind (arbitrary) multiplexing. We showed the algorithmic hardness of computing the exact worst cases, but also provided a new heuristic for computing them, which has polynomial complexity in interesting cases.
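The single-server case gives the flavor of how arrival and service envelopes combine. The sketch below uses the textbook closed forms for a token-bucket arrival curve crossing a rate-latency server (an illustration of the general (min, +) machinery, not the thesis's multi-flow operators); the numeric values are invented.

```python
# Token-bucket arrival a(t) = b + r*t, rate-latency service
# beta(t) = R * max(t - T, 0). The classic Network Calculus bounds are
# delay <= T + b/R and backlog <= b + r*T, provided r <= R.
def delay_bound(b, r, R, T):
    assert r <= R, "stability requires arrival rate <= service rate"
    return T + b / R

def backlog_bound(b, r, R, T):
    assert r <= R
    return b + r * T

d = delay_bound(b=2000.0, r=1e6, R=2e6, T=0.001)   # bits, bits/s, s
# d = 0.001 + 2000/2e6 = 0.002 s
```

In the multi-node, aggregated-flow setting studied in the thesis, these closed forms no longer apply directly, which is precisely what makes the worst case hard to compute exactly.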
99

Design and application analysis of active circulators for operation at continuous-wave Doppler ultrasound frequencies

Tales Roberto de Souza Santini 11 July 2014 (has links)
Traditional circulators are widely used in telecommunications and military defense to transmit and receive signals simultaneously over a single medium. These passive circuits, built from ferromagnetic materials, have the drawback that their dimensions, weight, and manufacturing cost grow as the design operating frequency decreases, which makes them impractical below 500 MHz. The active circulator emerged as an alternative, with applications at frequencies from DC up to tens of gigahertz; it is most attractive where compact, low-cost, low-power devices are required. The first proposed circuits were severely limited in operating frequency and in the power delivered to the load, but advances in electronics have since mitigated these problems. This work presents the development of an active circulator circuit for electronic instrumentation, in particular for operation at the frequencies used in continuous-wave Doppler ultrasound equipment, in the 2 MHz to 10 MHz range. The potential advantages of using circulators in ultrasound systems include a higher signal-to-noise ratio, a larger transducer reception area, simpler transducer construction, a simpler demodulation/processing circuit, and greater isolation between the transmission and reception circuits. In the initial phase, the proposed active circulator is modeled analytically, using both the ideal operational-amplifier model and its frequency-response model; computer simulations were run to confirm the validity of the equations. A circuit assembled on a rapid-prototyping breadboard was then presented, and proof-of-concept tests at low frequencies showed close agreement among the theoretical, simulated, and experimental results.
In the second phase, the circulator circuit was designed for operation at higher frequencies. The proposed circuit consists of three current-feedback operational amplifiers and several passive components. A sensitivity analysis using Monte Carlo and worst-case methods produced a profile of the circuit's behavior under variations of its components and of the load impedance. A printed circuit board was designed following good layout practices for high-frequency operation. On the assembled circuit, the following tests and measurements were performed: time-domain behavior, dynamic range, isolation level as a function of signal amplitude, bandwidth, measurement of the scattering parameters, and transmission and reception of signals through a continuous-wave Doppler ultrasound transducer. The performance results were satisfactory: a signal transmission band up to 100 MHz, isolation between non-consecutive ports of 39 dB at the Doppler ultrasound frequency of interest, and isolation greater than 20 dB for frequencies up to 35 MHz. The dynamic range exceeded 5 Vpp, and the circuit performed well in simultaneous transmission and reception of signals through the ultrasound transducer.
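The kind of Monte Carlo component-tolerance analysis described in the abstract can be illustrated with a minimal sketch. The snippet below is not the thesis's circulator netlist; purely for illustration it assumes a first-order RC stage with hypothetical 5 % resistor and 10 % capacitor tolerances, draws component values uniformly within those bands, and reports the spread of the resulting -3 dB corner frequency.

```python
import math
import random

def corner_frequency(r_ohms, c_farads):
    """-3 dB corner of a first-order RC stage: f_c = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

def monte_carlo_corner(r_nom, c_nom, tol_r=0.05, tol_c=0.10,
                       runs=10_000, seed=42):
    """Draw R and C uniformly within their tolerance bands and collect
    the distribution of the corner frequency over all runs."""
    rng = random.Random(seed)
    samples = []
    for _ in range(runs):
        r = r_nom * (1.0 + rng.uniform(-tol_r, tol_r))
        c = c_nom * (1.0 + rng.uniform(-tol_c, tol_c))
        samples.append(corner_frequency(r, c))
    samples.sort()
    return {"min": samples[0],
            "median": samples[runs // 2],
            "max": samples[-1]}

# Hypothetical nominal values: 1 kOhm and 15.9 pF put the corner near 10 MHz.
stats = monte_carlo_corner(1e3, 15.9e-12)
```

A worst-case analysis, by contrast, would evaluate only the extreme tolerance corners rather than sampling the whole band.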
100

Cache Prediction and Execution Time Analysis on Real-Time MPSoC

Neikter, Carl-Fredrik January 2008 (has links)
Real-time systems require not only that the logical operations be correct; it is equally important that the specified time constraints always be met. This has been studied successfully for mono-processor systems, but as the hardware becomes more complex, the previous approaches are invalidated. For example, multi-processor systems-on-chip (MPSoC) are becoming increasingly common, and with a shared memory the bus access time is inherently unpredictable. This has recently been resolved, but a cache analysis approach for MPSoC that is safe yet not overly pessimistic had not been investigated before. This thesis presents the design and implementation of algorithms for cache analysis on real-time MPSoC with a shared communication infrastructure. As an additional advantage, the algorithms improve on previous approaches for mono-processor systems. These algorithms were verified with the help of data flow analysis theory. Furthermore, it is not known how the different cache-miss characteristics of a task influence its worst-case execution time on an MPSoC. Therefore, a program was constructed that generates randomized tasks according to parameters controlling, for example, the complexity of the control flow graph and the average distance between cache misses.
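The style of cache analysis this abstract refers to can be sketched as a data-flow "must" analysis: a fixed-point pass that computes, for every node of a control flow graph, the set of memory blocks guaranteed to be cached along every incoming path, so the access at that node can be classified as an always-hit. The sketch below is illustrative only: it assumes a fully associative cache, models capacity crudely (replacement ages and the shared MPSoC bus are ignored), and uses a hypothetical diamond-shaped CFG.

```python
def transfer(cached, block, capacity):
    """Abstract cache state after a node that accesses `block`.
    Crude capacity model: once more than `capacity` blocks would be
    guaranteed, only the most recently accessed block is kept."""
    out = set(cached)
    if block is not None:
        out.add(block)
        if len(out) > capacity:
            out = {block}
    return out

def must_analysis(cfg, accesses, entry, capacity):
    """Worklist fixed-point must-analysis.
    cfg: node -> list of successors; accesses: node -> block (or None).
    Returns node -> set of blocks guaranteed cached *before* the node.
    The join over predecessors is set intersection, so a block survives
    only if it is cached along every path -- the safe direction."""
    state_in = {n: None for n in cfg}  # None = node not yet reached
    state_in[entry] = set()            # nothing guaranteed at entry
    worklist = [entry]
    while worklist:
        n = worklist.pop()
        out = transfer(state_in[n], accesses.get(n), capacity)
        for s in cfg[n]:
            joined = out if state_in[s] is None else state_in[s] & out
            if joined != state_in[s]:
                state_in[s] = joined
                worklist.append(s)
    return state_in

# Hypothetical diamond CFG: A branches to B and C, which rejoin at D.
cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
accesses = {"A": "x", "B": "x", "C": "y", "D": "x"}
states = must_analysis(cfg, accesses, "A", capacity=2)
# "x" is accessed at A, before the branch, so it is guaranteed cached
# on both paths into D: the access at D is classified as an always-hit.
```

Intersection terminates because the per-node sets only shrink; a "may" analysis would instead join with union to classify always-misses.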
