31

Scalable Trajectory Approach for Ensuring Deterministic Guarantees in Large Networks

Medlej, Sara 26 September 2013 (has links)
Any faulty behavior of a safety-critical real-time system, such as those used in avionics networks or the nuclear sector, can endanger lives. Hence, verification and validation of these systems are essential before deployment; in fact, safety authorities require deterministic guarantees. In this thesis, we are interested in providing temporal guarantees; in particular, we need to prove that the end-to-end response time of every flow present in the network is bounded. This subject has been addressed for many years and several approaches have been developed. After a brief comparison of the existing approaches, the Trajectory Approach emerged as a good candidate due to the tightness of the bound it offers. This method uses results established by scheduling theory to derive an upper bound. Overestimating the computed bound can lead to the network failing certification, so a first part of the work consists in identifying the sources of pessimism of the adopted approach; under FIFO scheduling, the terms that add pessimism to the computed bound are identified. Like other methods, however, the Trajectory Approach suffers from a scalability problem: it must be applicable to networks composed of around a hundred switches and thousands of flows, so it must be able to deliver results in an acceptable time frame. Analysis shows that the computational complexity is due to a recursive and iterative process; as the number of flows and switches increases, the total runtime required to compute the upper bound of every flow in the network under study grows rapidly.
While remaining based on the Trajectory Approach, we then propose to compute an upper bound in a reduced time frame and without significant loss of precision; we call this the Scalable Trajectory Approach. A tool was developed to compare the results obtained by the original Trajectory Approach and our proposal. Applied to a reduced-size network of 10 switches, simulation results show that the total runtime needed to compute the bounds of a thousand flows was reduced from several days to about ten seconds.
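The recursive, iterative computation that dominates the runtime resembles the classical response-time fixed-point recurrence from scheduling theory; a minimal illustrative sketch of that style of iteration (not the thesis's actual algorithm; all names and values are hypothetical):

```python
import math

def response_time_bound(c, interferers, limit=10_000):
    """Iteratively solve W = c + sum(ceil(W / T_j) * C_j) over the
    interfering flows j (cost C_j, period T_j); the fixed point is an
    upper bound on the response time."""
    w = c
    for _ in range(limit):
        nxt = c + sum(math.ceil(w / t_j) * c_j for (c_j, t_j) in interferers)
        if nxt == w:          # fixed point reached: the bound is stable
            return w
        w = nxt
    raise RuntimeError("did not converge (flow set may be unschedulable)")

# Example: one flow with cost 2, two interferers given as (cost, period)
print(response_time_bound(2, [(1, 5), (2, 10)]))  # 5
```

Iterating such a recurrence once per flow, per network node, is what makes the runtime grow rapidly with the number of flows and switches.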
32

Precise Analysis of Private And Shared Caches for Tight WCET Estimates

Nagar, Kartik January 2016 (has links) (PDF)
Worst Case Execution Time (WCET) is an important metric for programs running on real-time systems, and finding precise estimates of a program’s WCET is crucial to avoid over-allocation and wastage of hardware resources and to improve the schedulability of task sets. Hardware caches have a major impact on a program’s execution time, and accurate estimation of a program’s cache behavior generally leads to a significant reduction of its estimated WCET. However, the cache behavior of an access cannot be determined in isolation, since it depends on the access history, and in multi-path programs, the sequence of accesses made to the cache is not fixed. Hence, the same access can exhibit different cache behavior in different execution instances. This issue is further exacerbated in shared caches in a multi-core architecture, where interfering accesses from co-running programs on other cores can arrive at any time and modify the cache state. Further, cache analysis aimed towards WCET estimation should be provably safe, in that the estimated WCET should always exceed the actual execution time across all execution instances. Faced with such contradicting requirements, previous approaches to cache analysis try to find memory accesses in a program which are guaranteed to hit the cache, irrespective of the program input, or of the interferences from other co-running programs in the case of a shared cache. To do so, they find the worst-case cache behavior for every individual memory access, analyzing the program (and interferences to a shared cache) to find whether there are execution instances where an access can suffer a cache miss. However, this approach loses out on making more precise predictions of private cache behavior which can be safely used for WCET estimation, and is significantly imprecise for shared cache analysis, where it is often impossible to guarantee that an access always hits the cache. 
In this work, we take a fundamentally different approach to cache analysis, by (1) trying to find worst-case behavior of groups of cache accesses, and (2) trying to find the exact cache behavior in the worst-case program execution instance, which is the execution instance with the maximum execution time. For shared caches, we propose the Worst Case Interference Placement (WCIP) technique, which finds the worst-case timing of interfering accesses that would cause the maximum number of cache misses on the worst case execution path of the program. We first use Integer Linear Programming (ILP) to find an exact solution to the WCIP problem. However, this approach does not scale well for large programs, and so we investigate the WCIP problem in detail and prove that it is NP-Hard. In the process, we discover that the source of hardness of the WCIP problem lies in finding the worst case execution path which would exhibit the maximum execution time in the presence of interferences. We use this observation to propose an approximate algorithm for performing WCIP, which bypasses the hard problem of finding the worst case execution path by simply assuming that all cache accesses made by the program occur on a single path. This allows us to use a simple greedy algorithm to distribute the interfering accesses by choosing those cache accesses which could be most affected by interferences. The greedy algorithm also guarantees that the increase in WCET due to interferences is linear in the number of interferences. Experimentally, we show that WCIP provides substantial precision improvement in the final WCET over previous approaches to shared cache analysis, and the approximate algorithm almost matches the precision of the ILP-based approach, while being considerably faster. For private caches, we discover multiple scenarios where hit-miss predictions made by traditional Abstract Interpretation-based approaches are not sufficient to fully capture cache behavior for WCET estimation. 
We introduce the concept of cache miss paths, which are abstractions of program paths along which an access can suffer a cache miss. We propose an ILP-based approach which uses cache miss paths to find the exact cache behavior in the worst-case execution instance of the program. However, the ILP-based approach needs information about the worst-case execution path to predict the cache behavior, and hence it is difficult to integrate it with other micro-architectural analyses. We then show that most of the precision improvement of the ILP-based approach can be recovered without any knowledge of the worst-case execution path, by a careful analysis of the cache miss paths themselves. In particular, we can use cache miss paths to find the worst-case behavior of groups of cache accesses. Further, we can find upper bounds on the maximum number of times that cache accesses inside loops can exhibit worst-case behavior. This results in a scalable, precise method for performing private cache analysis which can be easily integrated with other micro-architectural analyses.
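The approximate WCIP algorithm described above greedily places interferences on the accesses they can hurt most, under the single-path assumption; a simplified sketch of that greedy step (names and penalty values are hypothetical, not the thesis's implementation):

```python
def greedy_wcip(accesses, num_interferences):
    """Greedy worst-case interference placement under the single-path
    assumption: repeatedly spend an interfering access where it converts
    the cache hit with the highest miss penalty into a miss.
    `accesses` is a list of (label, miss_penalty) cache hits that an
    interference could invalidate."""
    # Sort candidate hits by the extra delay a miss there would cause.
    candidates = sorted(accesses, key=lambda a: a[1], reverse=True)
    placed = candidates[:num_interferences]      # most damaging first
    extra_wcet = sum(p for (_, p) in placed)     # linear in #interferences
    return [label for (label, _) in placed], extra_wcet

hits = [("a1", 10), ("a2", 30), ("a3", 20)]
labels, delta = greedy_wcip(hits, 2)
print(labels, delta)  # ['a2', 'a3'] 50
```

This makes visible why the increase in WCET due to interferences is linear in their number: each placed interference contributes at most one miss penalty.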
33

From Theory to Implementation of Embedded Control Applications : A Case Study

Fize, Florian January 2016 (has links)
Control applications are used in almost all scientific domains and are subject to timing constraints. Moreover, different applications can run on the same platform, which leads to even more complex timing behavior. However, some of these timing issues are not always considered in the implementation of such applications, and this can make the system fail. In this thesis, the timing issues are considered: specifically, the problem of non-constant delay in the control of an inverted pendulum with a real-time kernel running on an ATmega328p micro-controller. The study shows that control performance is affected by this problem. In addition, the thesis reports the adaptation of an existing real-time kernel, based on an EDF (Earliest Deadline First) scheduling policy, to the architecture of the ATmega328p. Moreover, a new server-based kernel approach is implemented in this thesis, still on the same Atmel micro-controller.
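An EDF kernel like the one adapted in this thesis dispatches, at each scheduling point, the ready task with the earliest absolute deadline; a minimal ready-queue sketch (the structure and task names are illustrative, not the actual kernel code):

```python
import heapq

class EDFScheduler:
    """Minimal Earliest-Deadline-First ready queue: tasks are kept in a
    min-heap ordered by absolute deadline; pick() returns the task that
    must run next."""
    def __init__(self):
        self._ready = []

    def release(self, deadline, name):
        # A released job is inserted keyed on its absolute deadline.
        heapq.heappush(self._ready, (deadline, name))

    def pick(self):
        # The job with the earliest absolute deadline runs first.
        return heapq.heappop(self._ready)[1] if self._ready else None

s = EDFScheduler()
s.release(30, "logger")
s.release(10, "pendulum_control")   # tighter deadline -> runs first
print(s.pick())  # pendulum_control
```

On a micro-controller the heap would typically be replaced by a small static array, but the selection rule is the same.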
34

Heterogeneous Flows Worst-Case Analysis in Avionics Embedded Networks

Bauer, Henri 04 October 2011 (has links)
The certification process for avionics networks requires guarantees on data transmission delays. However, the multiplexing and sharing of communication resources in networks such as AFDX (Avionics Full Duplex Switched Ethernet) make it difficult to compute a worst-case end-to-end delay for each flow. Tools such as Network Calculus provide a (pessimistic) upper bound on this worst-case delay.
The communication needs of modern commercial aircraft keep expanding, and a growing number of flows with various constraints and characteristics must share the existing resources. Currently deployed AFDX networks do not differentiate between classes of traffic: messages are processed in their arrival order in the output ports of the switches (FIFO servicing policy). The purpose of this thesis is to show that it is possible to compute upper bounds on end-to-end transmission delays in networks that implement more advanced servicing policies, based on static priorities (Priority Queuing) or on fairness (Fair Queuing). We show how the Trajectory Approach, rooted in scheduling theory for asynchronous distributed systems, can be applied to current and future AFDX networks (supporting advanced servicing policies with flow differentiation capabilities). We compare the performance of this approach with the reference tools whenever possible and study the pessimism of the computed upper bounds.
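For the common case of a token-bucket constrained flow crossing a rate-latency server, the Network Calculus delay bound mentioned above has a closed form, D <= T + b/R; a minimal sketch of that textbook formula (an illustration of the general technique, not this thesis's AFDX model):

```python
def nc_delay_bound(b, r, R, T):
    """Network Calculus delay bound for a flow with arrival curve
    alpha(t) = b + r*t (burst b, rate r) served by a rate-latency
    server beta(t) = R*(t - T)+: the maximum horizontal deviation
    between the two curves is D = T + b / R, valid when r <= R."""
    if r > R:
        raise ValueError("unstable: arrival rate exceeds service rate")
    return T + b / R

# Example: burst 4000 bits at 1 Mb/s, server 10 Mb/s with 40 us latency
print(nc_delay_bound(4000, 1e6, 10e6, 40e-6))  # 0.00044 (seconds)
```

Per-hop bounds of this kind are then composed along a flow's path, which is one source of the pessimism the thesis studies.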
35

Certified Compilation and Worst-Case Execution Time Estimation

Maroneze, André Oliveira 17 June 2014 (has links)
Safety-critical systems - such as electronic flight control systems and nuclear reactor controls - must satisfy strict safety requirements. We are interested here in the application of formal methods - built upon solid mathematical bases - to verify the behavior of safety-critical systems. More specifically, we formally specify our algorithms and then prove them correct using the Coq proof assistant - a program that mechanically checks the correctness of our proofs, providing a very high degree of confidence. In this thesis, we apply formal methods to obtain safe Worst-Case Execution Time (WCET) estimations for C programs. The WCET is an important property related to the safety of critical systems, but its estimation requires sophisticated techniques. To guarantee the absence of errors during WCET estimation, we have formally verified a WCET estimation technique based on the combination of two main methods: a loop bound estimation and WCET estimation via the Implicit Path Enumeration Technique (IPET). The loop bound estimation is itself decomposed into three steps: program slicing, a value analysis based on abstract interpretation, and a loop bound calculation stage. Each stage has a chapter dedicated to its formal verification. The entire development has been integrated into the formally verified C compiler CompCert. We prove that the final estimation is correct and we evaluate its performance on a set of reference benchmarks.
The contributions of this thesis include (a) the formalization of the techniques used to estimate the WCET, (b) the estimation tool itself (obtained from the formalization), and (c) the experimental evaluation. We conclude that our formally verified development obtains interesting results in terms of precision, but it requires special precautions to ensure that the proof effort remains manageable. The parallel development of specifications and proofs is essential to this end. Future work includes the formalization of hardware cost models, as well as the development of more sophisticated analyses to improve the precision of the estimated WCET.
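IPET formulates WCET estimation as an integer linear program: maximize the sum of per-block costs times execution counts, subject to control-flow conservation and loop-bound constraints. A toy instance for a loop whose body takes either an if- or an else-branch, solved by brute-force enumeration instead of an ILP solver (costs and structure are hypothetical, not CompCert's actual formulation):

```python
def ipet_wcet(loop_bound, cost_then, cost_else, cost_header):
    """Tiny IPET instance: a loop running at most `loop_bound` times whose
    body executes either the then- or the else-branch.  Maximize total
    cost subject to x_then + x_else == iterations <= loop_bound (flow
    conservation plus the loop bound)."""
    best = 0
    for iters in range(loop_bound + 1):
        for x_then in range(iters + 1):
            x_else = iters - x_then
            cost = iters * cost_header + x_then * cost_then + x_else * cost_else
            best = max(best, cost)
    return best

# Loop of at most 10 iterations; then-branch costs 7, else 3, header 2
print(ipet_wcet(10, 7, 3, 2))  # 90 = 10 * (2 + 7)
```

Real IPET instances hand the same maximization, over thousands of counts, to an ILP solver; the formally verified development proves that the resulting value upper-bounds every actual execution.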
36

Worst-case study for cleaning validation of equipment in the radiopharmaceutical production of lyophilized reagents. Methodology validation of total organic carbon.

Porto, Luciana Valeria Ferrari Machado 18 December 2015 (has links)
Radiopharmaceuticals are defined as pharmaceutical preparations containing a radionuclide in their composition; most are administered intravenously, and therefore compliance with the principles of Good Manufacturing Practices (GMP) is essential and indispensable. Cleaning validation is a GMP requirement and consists of documented evidence demonstrating that the cleaning procedures remove residues to pre-determined acceptance levels, ensuring that no cross-contamination occurs.
A simplification of cleaning process validation is accepted: it consists in choosing one product, called the "worst case", to represent the cleaning of all equipment in the same production line. One of the steps of cleaning validation is the establishment and validation of the analytical method used to quantify the residue. The aim of this study was to establish the worst case for cleaning validation of the equipment used in the radiopharmaceutical production of lyophilized reagents (LR) for labeling with 99mTc, to evaluate the use of Total Organic Carbon (TOC) content as an indicator of the cleanliness of the equipment used in LR manufacture, to validate the method for determination of Non-Purgeable Organic Carbon (NPOC), and to perform recovery tests with the product chosen as the worst case. The choice of the worst-case product was based on the calculation of a "Worst Case Index" (WCI), using information about drug solubility, the difficulty of cleaning the equipment, and the occupancy rate of the products in the production line. The product indicated as the worst case among the LR was MIBI-TEC. The method validation assays were performed using a TOC-Vwp carbon analyzer coupled to an ASI-V autosampler, both from Shimadzu and controlled by TOC Control-V software; the direct method of NPOC quantification was used. The parameters evaluated in the method validation were system suitability, robustness, linearity, detection limit (DL), quantification limit (QL), precision (repeatability and intermediate precision), and accuracy (recovery). They were defined as follows: 4% acidifying reagent, 2.5 mL oxidizing reagent, 4.5 minutes curve integration time, 3.0 minutes sparge time, and linearity in the 40-1000 μg L-1 range, with correlation coefficient (r) and residual sum of least squares (r2) greater than 0.99. DL and QL for NPOC were 14.25 ppb and 47.52 ppb, respectively; repeatability was between 0.11 and 4.47%, intermediate precision between 0.59 and 3.80%, and accuracy between 97.05 and 102.90%.
The analytical curve for Mibi was linear in the 100-800 μg L-1 range, with r and r2 greater than 0.99, presenting parameters similar to those of the NPOC analytical curves. The results obtained in this study demonstrate that the worst-case approach to cleaning validation is a simple and effective way to reduce the complexity and slowness of the validation process, in addition to reducing the costs involved in these activities. All results obtained in the NPOC method validation assays met the requirements and specifications of ANVISA Resolution RE 899/2003 for the method to be considered validated.
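The Worst Case Index combines the three factors named above (solubility, cleaning difficulty, occupancy rate) into a single score per product; a purely hypothetical weighted-score sketch of that idea (the thesis's actual WCI formula and the numbers below are not reproduced here):

```python
def worst_case_index(solubility_score, cleaning_difficulty, occupancy_rate):
    """Hypothetical WCI: a less soluble, harder-to-clean, more frequently
    produced product scores higher and is a stronger worst-case candidate.
    Each input is assumed normalized to [0, 1]; low solubility raises
    the score, so it enters inverted."""
    for v in (solubility_score, cleaning_difficulty, occupancy_rate):
        if not 0.0 <= v <= 1.0:
            raise ValueError("inputs must be normalized to [0, 1]")
    return (1.0 - solubility_score) + cleaning_difficulty + occupancy_rate

# Illustrative, made-up scores for two products on the same line
products = {
    "MIBI-TEC": worst_case_index(0.1, 0.9, 0.7),
    "DTPA-TEC": worst_case_index(0.8, 0.3, 0.5),
}
print(max(products, key=products.get))  # the worst-case candidate
```

The product maximizing the index is the one whose cleaning procedure is validated on behalf of the whole line.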
37

Low-Complexity Space-Time Coding for MIMO Systems

Ismail, Amr 24 November 2011 (has links)
The last few years have witnessed a dramatic increase in the demand for high-rate reliable wireless communications. In order to meet these new requirements, resorting to Multiple-Input Multiple-Output (MIMO) techniques was inevitable, as they can offer high-rate reliable wireless communications without any additional bandwidth. In the case where the transmitter has no prior knowledge of the channel state information, space-time coding techniques have proved to efficiently exploit the degrees of freedom of the MIMO channel while taking advantage of the maximum diversity gain.
On the other hand, the ML decoding complexity of Space-Time Codes (STCs) generally increases exponentially with the rate, which poses a significant challenge to their incorporation in recent communications standards. Recognizing the importance of the low-complexity criterion in STC design for practical considerations, this thesis focuses on the design of new low-complexity Space-Time Block Codes (STBCs), where the transmitted code matrix can be expressed as a weighted linear combination of the information symbols, and proposes new codes that can be decoded with lower complexity than their rivals in the literature while providing better or only slightly worse performance.
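The linear structure mentioned above (code matrix as a weighted linear combination of information symbols) is easiest to see on the classical Alamouti code, used here purely as a well-known illustration, not as one of the thesis's proposed codes:

```python
import numpy as np

def alamouti(s1, s2):
    """2x2 Alamouti space-time block code: rows = time slots,
    columns = transmit antennas.  The matrix is linear in the
    information symbols s1, s2 -- the structure that low-complexity
    STBC designs build on."""
    return np.array([[s1,            s2],
                     [-np.conj(s2),  np.conj(s1)]])

X = alamouti(1 + 1j, 1 - 1j)
# Orthogonality X^H X = (|s1|^2 + |s2|^2) I enables symbol-by-symbol
# ML decoding -- the source of Alamouti's low decoding complexity.
G = X.conj().T @ X
print(np.allclose(G, 4 * np.eye(2)))  # True: |s1|^2 + |s2|^2 = 4
```

Higher-rate codes lose this full orthogonality, which is exactly why keeping the ML decoding complexity low becomes a design problem.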
38

Certified Compilation and Worst-Case Execution Time Estimation

Oliveira Maroneze, André 17 June 2014 (has links) (PDF)
Safety-critical systems - such as electronic flight control systems and nuclear reactor controls - must satisfy strict safety requirements. We are interested here in the application of formal methods - built upon solid mathematical bases - to verify the behavior of safety-critical systems. More specifically, we formally specify our algorithms and then prove them correct using the Coq proof assistant - a program capable of mechanically checking the correctness of our proofs, providing a very high degree of confidence. In this thesis, we apply formal methods to obtain safe Worst-Case Execution Time (WCET) estimations for C programs. The WCET is an important property related to the safety of critical systems, but its estimation requires sophisticated techniques. To guarantee the absence of errors during WCET estimation, we have formally verified a WCET estimation technique based on the combination of two main methods: a loop bound estimation and the WCET estimation via the Implicit Path Enumeration Technique (IPET). The loop bound estimation itself is decomposed in three steps: a program slicing, a value analysis based on abstract interpretation, and a loop bound calculation stage. Each stage has a chapter dedicated to its formal verification. The entire development has been integrated into the formally verified C compiler CompCert. We prove that the final estimation is correct and we evaluate its performances on a set of reference benchmarks. The contributions of this thesis include (a) the formalization of the techniques used to estimate the WCET, (b) the estimation tool itself (obtained from the formalization), and (c) the experimental evaluation. We conclude that our formally verified development obtains interesting results in terms of precision, but it requires special precautions to ensure the proof effort remains manageable. The parallel development of specifications and proofs is essential to this end. 
Future works include the formalization of hardware cost models, as well as the development of more sophisticated analyses to improve the precision of the estimated WCET.
39

Optimization Techniques for Performance and Power Dissipation in Test and Validation

Jayaraman, Dheepakkumaran 01 May 2012 (has links)
The high cost of chip testing makes testability an important aspect of any chip design. Two important testability considerations are addressed, namely power consumption and test quality. The power consumption during shift is reduced by efficiently adding control logic to the design. Test quality is studied by determining the sensitization characteristics of a path to be tested; path delay fault models are used for studying this problem. Another important aspect of chip design is performance validation, which is increasingly perceived as the major bottleneck in integrated circuit design. Given synthesizable HDL code, the proposed technique efficiently identifies infeasible paths and subsequently determines the worst-case execution time (WCET) of the HDL code.
40

Worst-case delay analysis of real-time switched Ethernet networks with flow local synchronization

Li, Xiaoting 19 September 2013 (has links)
Les réseaux Ethernet commuté full-duplex constituent des solutions intéressantes pour des applications industrielles. Mais le non-déterminisme d’un commutateur IEEE 802.1d, fait que l’analyse pire cas de délai de flux critiques est encore un problème ouvert. Plusieurs méthodes ont été proposées pour obtenir des bornes supérieures des délais de communication sur des réseaux Ethernet commuté full duplex temps réels, faisant l’hypothèse que le trafic en entrée du réseau peut être borné. Le problème principal reste le pessimisme introduit par la méthode de calcul de cette borne supérieure du délai. Ces méthodes considèrent que tous les flux transmis sur le réseau sont indépendants. Ce qui est vrai pour les flux émis par des nœuds sources différents car il n’existe pas, dans le cas général, d’horloge globale permettant de synchroniser les flux. Mais pour les flux émis par un même nœud source, il est possible de faire l’hypothèse d’une synchronisation locale de ces flux. Une telle hypothèse permet de bâtir un modèle plus précis des flux et en conséquence élimine des scénarios impossibles qui augmentent le pessimisme du calcul. Le sujet principal de cette thèse est d’étudier comment des flux périodiques synchronisés par des offsets peuvent être gérés dans le calcul des bornes supérieures des délais sur un réseau Ethernet commuté temps-réel. Dans un premier temps, il s’agit de présenter l’impact des contraintes d’offsets sur le calcul des bornes supérieures des délais de bout en bout. Il s’agit ensuite de présenter comment intégrer ces contraintes d’offsets dans les approches de calcul basées sur le Network Calculus et la méthode des Trajectoires. Une méthode Calcul Réseau modifiée et une méthode Trajectoires modifiée sont alors développées et les performances obtenues sont comparées. Le réseau avionique AFDX (Avionics Full-Duplex Switched Ethernet) est pris comme exemple d’un réseau Ethernet commuté full-duplex. 
Full-duplex switched Ethernet is a promising candidate for interconnecting real-time industrial applications. But due to IEEE 802.1d indeterminism, the worst-case delay analysis of critical flows supported by such a network is still an open problem. Several methods have been proposed for upper-bounding communication delays on a real-time switched Ethernet network, assuming that the incoming traffic can be upper bounded. The main remaining problem is to assess the tightness, i.e. the pessimism, of the method that computes this upper bound on the communication delay.
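The assumption that incoming traffic can be upper bounded is the classic starting point of Network Calculus: a flow constrained by a token-bucket arrival curve crossing a rate-latency server admits a closed-form worst-case delay. The following minimal sketch illustrates that textbook bound; the function name and the numeric values are illustrative, not taken from the thesis.

```python
def delay_bound(burst, rate, service_rate, latency):
    """Textbook Network Calculus bound: a token-bucket arrival curve
    alpha(t) = burst + rate * t crossing a rate-latency service curve
    beta(t) = service_rate * max(t - latency, 0), with rate <= service_rate.
    The worst-case delay is the maximal horizontal deviation between the
    two curves: latency + burst / service_rate."""
    assert rate <= service_rate, "flow must be sustainable by the server"
    return latency + burst / service_rate

# e.g. a 4000-bit burst at 1 Mbit/s through a 100 Mbit/s port
# with 16 microseconds of switching latency
print(delay_bound(4000, 1e6, 100e6, 16e-6))  # 5.6e-05 s
```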
These methods consider that all flows transmitted over the network are independent. This is true for flows emitted by different source nodes since, in general, there is no global clock synchronizing them. But the flows emitted by the same source node are locally synchronized. Such an assumption helps to build a more precise flow model that eliminates some impossible communication scenarios which lead to pessimistic delay upper bounds. The core of this thesis is to study how local periodic flows synchronized with offsets can be handled when computing delay upper bounds on a real-time switched Ethernet network. In a first step, the impact of these offsets on the delay upper-bound computation is illustrated. Then, the integration of offsets into the Network Calculus and the Trajectory approaches is introduced. A modified Network Calculus approach and a modified Trajectory approach are then developed, and their performances are compared on an Avionics Full-DupleX switched Ethernet (AFDX) industrial configuration with one thousand flows. It has been shown that, in the context of this AFDX configuration, the Trajectory approach leads to slightly tighter end-to-end delay upper bounds than those of the Network Calculus approach. But the offsets of local flows have to be chosen. Different offset assignment algorithms are then investigated on the AFDX industrial configuration, and a near-optimal assignment can be exhibited. Next, a pessimism analysis of the computed upper bounds is proposed. This analysis is based on the Trajectory approach (made optimistic), which computes an under-estimation of the worst-case delay. The difference between the upper bound (computed by a given method) and the under-estimation of the worst-case delay gives an upper bound on the pessimism of the method. This analysis gives interesting comparison results on the pessimism of the Network Calculus and the Trajectory approaches.
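The intuition behind offsets and behind the pessimism analysis can be pictured with a deliberately simplified toy model: without offsets, all frames of one source node may hit a FIFO output port back-to-back, while offsets that space releases by at least one transmission time serialize them. This is only a sketch under that assumption; it is not the thesis's modified Trajectory or Network Calculus computation, and all function names and time units (microseconds) are hypothetical.

```python
def fifo_delay_no_offsets(n_flows, frame_time):
    # worst case without offsets: a frame may wait behind the
    # n_flows - 1 other frames of its source, plus its own transmission
    return n_flows * frame_time

def fifo_delay_with_offsets(n_flows, frame_time, offset_gap):
    # releases separated by offset_gap at the source; if the gap covers
    # one transmission time, frames of the same source never queue up
    if offset_gap >= frame_time:
        return frame_time
    # otherwise only the residual burst accumulates at the port
    residual_backlog = (frame_time - offset_gap) * (n_flows - 1)
    return frame_time + residual_backlog

def pessimism_bound(upper_bound, optimistic_under_estimate):
    # the gap between a computed upper bound and an optimistic
    # under-approximation of the worst case bounds the method's pessimism
    return upper_bound - optimistic_under_estimate

# 10 flows of 100 us frames from one source node
print(fifo_delay_no_offsets(10, 100))        # 1000 us
print(fifo_delay_with_offsets(10, 100, 100))  # 100 us
```

Spacing releases with offsets removes the impossible scenario in which all ten frames queue simultaneously, which is exactly the kind of scenario whose elimination tightens the bounds.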
The last part of the thesis deals with a real-time heterogeneous network architecture where CAN buses are interconnected through a switched Ethernet backbone using dedicated bridges. Two approaches, a component-based approach and the Trajectory approach, are developed to conduct a worst-case delay analysis for such a network. Clearly, the ability to compute end-to-end delay upper bounds in the context of a heterogeneous network architecture is promising for industrial domains.
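One way to picture a component-based style of analysis on such a heterogeneous path is to give each traversed component its own worst-case traversal bound and sum them. The sketch below assumes such per-component bounds are already available; the path, the function, and the numbers (in microseconds) are hypothetical illustrations, not results from the thesis.

```python
def end_to_end_bound(per_component_bounds):
    """Component-based composition: if each traversed component
    (source CAN bus, ingress bridge, Ethernet switch hops, egress
    bridge, destination CAN bus) has a worst-case traversal bound,
    their sum upper-bounds the end-to-end delay. The sum can be
    pessimistic, since the worst cases rarely align in every
    component at once."""
    return sum(per_component_bounds)

# hypothetical CAN -> bridge -> 2 switch hops -> bridge -> CAN path, in us
path = [500, 120, 80, 80, 120, 500]
print(end_to_end_bound(path))  # 1400 us
```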
