101

Transportation planning via location-based social networking data: exploring many-to-many connections

Cebelak, Meredith Kimberly, 17 September 2015
Today’s metropolitan areas see changes in population and land development occurring faster than transportation plans can be updated. This dissertation explores the use of a new dataset from the location-based social networking spectrum to analyze origin-destination travel demand within Austin, TX. A detailed exploration of the proposed data source is conducted to determine its overall capabilities with respect to Austin-area demographics. A new methodology is proposed for the creation of origin-destination matrices using a peer-to-peer modeling structure. This methodology is compared against a previously examined and more traditional approach, the doubly-constrained gravity model, to understand the capabilities of both models with various friction functions. Each method is evaluated against the study area’s existing origin-destination matrix by examining coincidence ratios, mean errors, mean absolute errors, frequency ratios, swap ratios, trip length distributions, zonal trip generation and attraction heat maps, and zonal origin-destination flow patterns. Through multiple measures, this dissertation provides initial interpretations of the robust Foursquare data collected for the Austin area. Based upon the data analytics performed, the Foursquare data source is shown to be capable of providing immensely detailed spatio-temporal data that can be utilized as a supplement to traditional transportation planning data collection methods or in conjunction with other data sources, such as social networking platforms. The examination of the proposed peer-to-peer methodology provides a first look at the potential of many-to-many modeling for transportation planning. The peer-to-peer model was found to be superior to the doubly-constrained gravity model with respect to intrazonal trips. Furthermore, the peer-to-peer model better estimated productions, attractions, and zone-to-zone movements when a linear friction function was used for long trips, and was computationally more efficient for all models examined.
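The doubly-constrained gravity model used as the baseline above is commonly solved by iterative proportional fitting (Furness balancing). Below is a minimal sketch under assumed inputs; the power friction function, zone counts, and tolerance are illustrative, not the dissertation's calibrated values.

```python
import numpy as np

def doubly_constrained_gravity(P, A, cost, beta=1.0, tol=1e-8, max_iter=200):
    """Balance T[i][j] = a[i]*P[i] * b[j]*A[j] * f(cost[i][j]) so that
    row sums equal productions P and column sums equal attractions A."""
    f = cost ** (-beta)          # power friction function (illustrative)
    a = np.ones_like(P)
    b = np.ones_like(A)
    for _ in range(max_iter):
        a_new = 1.0 / (f @ (b * A))
        b_new = 1.0 / (f.T @ (a_new * P))
        if np.allclose(b_new, b, rtol=tol):
            a, b = a_new, b_new
            break
        a, b = a_new, b_new
    return np.outer(a * P, b * A) * f

# Toy 3-zone example; productions and attractions must balance in total.
P = np.array([100.0, 200.0, 50.0])
A = np.array([150.0, 120.0, 80.0])
cost = np.array([[1.0, 2.0, 3.0],
                 [2.0, 1.0, 2.0],
                 [3.0, 2.0, 1.0]])
T = doubly_constrained_gravity(P, A, cost)
print(T.round(1))
print(T.sum(axis=1))  # matches P
print(T.sum(axis=0))  # matches A
```

Swapping the friction function (power, exponential, linear for long trips as in the dissertation) only changes the line that computes `f`.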
102

Functional native simulation techniques for many-core systems

Sarrazin, Guillaume, 23 May 2016
The number of transistors on a chip keeps increasing, following Moore's conjecture that it doubles every two years. Today's systems are so complex that architectural exploration, chip design, and application software development take too much time, even when hardware and software are developed in parallel, often because of system integration issues. To help reduce this time, the usual solution is to build virtual platforms that reproduce the behavior of the target chip. Simulation speed is a major issue for these platforms, especially for many-core systems, where the number of programmable cores is very high. This thesis therefore focuses on native simulation, whose principle is to compile the source code directly for the host architecture, allowing very fast simulation at the cost of requiring "equivalent" features on the target and host cores. However, some functional features specific to the target core may be missing on the host core, and using Hardware-Assisted Virtualization (HAV) as the basis for native simulation further ties the simulation of the target chip to the host core's capabilities. In this context, we propose a way to simulate the target core's specific functional features in HAV-based native simulation. Among these features, the floating-point unit is an important element that is too often neglected in native simulation, causing some computations to produce different results on the target and the host. We restrict our study to compiled simulation and propose a methodology for accurately simulating floating-point computations while keeping a good simulation speed. Finally, native simulation has a scalability problem: time-decoupling issues cause some instructions to be simulated needlessly during synchronization procedures between tasks running on the target cores, reducing simulation speed as the number of cores grows. We propose solutions that let native simulation scale better.
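A minimal sketch of the floating-point problem described above, assuming a target with strict IEEE-754 binary32 arithmetic simulated on a host that computes in binary64: forcing every intermediate result back to single precision (as a software helper would) reproduces target behavior that a naive native run misses. The helper name and the values are illustrative, not the thesis's actual mechanism.

```python
import numpy as np

def target_fadd(a, b):
    """Emulate a binary32 target addition on a binary64 host by rounding
    the result of every operation back to single precision."""
    return np.float32(np.float32(a) + np.float32(b))

big, small = 1.0e8, 0.25
host = big                 # naive native simulation: host doubles
target = np.float32(big)   # per-operation emulation of target singles
for _ in range(1000):
    host += small
    target = target_fadd(target, small)

print(host)           # 100000250.0 : the binary64 host accumulates the 0.25s
print(float(target))  # 100000000.0 : binary32 rounding absorbs each 0.25
```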
103

A scalable approach to many-to-many access control in information-centric networks

Silva, Rafael Hansen da, January 2016
One of the main challenges in Information-Centric Networking (ICN) is providing access control over content publication and retrieval. Despite their potential, existing solutions generally assume a single user acting as publisher; when dealing with multiple publishers, they can lead to a combinatorial explosion of cryptographic keys. Solutions designed for multiple publishers, in turn, depend on specific network architectures and/or changes to them in order to operate. This dissertation proposes a solution, based on attribute-based encryption, for content access control. Its security model is built around secure sharing groups in which every member can both publish and retrieve content. Unlike previous work, the proposed solution keeps the number of keys proportional to the number of group members and can be adopted gradually in any ICN architecture. The proposal is evaluated with respect to its operational overhead, the number of required keys, and its efficiency in content dissemination. Compared to existing solutions, it offers more flexible access control without increasing the complexity of key management and without imposing significant network overhead.
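A minimal sketch of the key-explosion argument, contrasting a naive scheme that issues one pairwise key per publisher-consumer pair with a group scheme that issues one key per member, as attribute-based encryption permits. The counting model is illustrative, not the dissertation's exact construction.

```python
def pairwise_keys(n):
    # Naive many-to-many sharing: one symmetric key per member pair.
    return n * (n - 1) // 2

def group_keys(n):
    # Attribute-based group scheme: one decryption key per member, plus
    # a constant-size set of public parameters (ignored here).
    return n

for n in (10, 100, 1000):
    print(n, pairwise_keys(n), group_keys(n))
# 10 45 10 / 100 4950 100 / 1000 499500 1000 -> quadratic vs. linear growth.
```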
105

Dynamic optimization of data-flow task-parallel applications for large-scale NUMA systems

Drebes, Andi, 25 June 2015
Within the last decade, microprocessor development reached a point at which higher clock rates and more complex micro-architectures became less energy-efficient, pushing power consumption and energy density beyond reasonable limits. As a consequence, the industry shifted to more energy-efficient multi-core designs, integrating multiple processing units (cores) on a single chip. Today's high-performance systems contain hundreds of cores, and future systems are expected to integrate thousands of processing units. To provide sufficient memory bandwidth in these systems, main memory is physically distributed over multiple memory controllers with non-uniform access to memory (NUMA). Past research has identified programming models based on fine-grained, dependent tasks as a key technique to unleash the parallel processing power of massively parallel general-purpose computing architectures. However, the execution of task-parallel programs on architectures with non-uniform memory access, and the dynamic optimizations needed to mitigate NUMA effects, have received little attention. In this thesis, we explore the main factors affecting performance and data locality of task-parallel programs and propose a set of transparent, portable, and fully automatic on-line mechanisms that map tasks to cores and data to memory controllers in order to improve data locality and performance. Placement decisions are based on information about point-to-point data dependences, readily available in the run-time systems of modern task-parallel programming frameworks. The experimental evaluation of these techniques is conducted on our implementation in the run-time of the OpenStream language and a set of high-performance scientific benchmarks. Finally, we designed and implemented Aftermath, a tool for performance analysis and debugging of task-parallel applications and run-times.
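A minimal sketch of dependence-driven task placement in the spirit described above, assuming the run-time knows, from point-to-point dependences, how many bytes of each task's input reside on each NUMA node. The representation and the node-selection rule are illustrative, not OpenStream's actual run-time mechanism.

```python
def place_task(input_bytes_by_node):
    """Run the task on the NUMA node holding most of its input data,
    so most dependences resolve to local memory accesses."""
    return max(input_bytes_by_node, key=input_bytes_by_node.get)

# A task consuming buffers spread over three NUMA nodes (bytes per node).
inputs = {0: 4096, 1: 65536, 2: 512}
print(place_task(inputs))  # -> 1

# The symmetric data-placement rule allocates a task's output buffer on
# the node where its consumer tasks are expected to execute.
```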
106

Introduction of Fault-Tolerance Mechanisms for Permanent Failures in Coherent Shared-Memory Many-Core Architectures

Fuguet Tortolero, César, 25 November 2015
L'augmentation continue de la puissance de calcul requise par les applications telles que la cryptographie, la simulation, ou le traitement du signal a fait évoluer la structure interne des processeurs vers des architectures massivement parallèles (dites « many-core »). Ces architectures peuvent contenir des centaines, voire des milliers de cœurs afin de fournir une puissance de calcul importante avec une consommation énergétique raisonnable. Néanmoins, l'importante densité de transistors fait que ces architectures sont très susceptibles aux pannes matérielles. L'augmentation dans la variabilité du processus de fabrication, et dans les facteurs de stress des transistors, dégrade à la fois le rendement de fabrication, et leur durée de vie. Nous proposons donc un mécanisme complet de tolérance aux pannes franches, permettant les architectures « many-core » à mémoire partagée cohérente de fonctionner dans un mode dégradé. Ce mécanisme s'appuie sur un logiciel embarqué et distribué dans des mémoires sur puce (« firmware »), qui est exécuté par les cœurs à chaque démarrage du processeur. Ce logiciel implémente plusieurs algorithmes distribués permettant de localiser les composants défaillants (cœurs, bancs mémoires, et routeurs des réseaux sur puce), de reconfigurer l'architecture matérielle, et de fournir une cartographie de l'infrastructure matérielle fonctionnelle au système d'exploitation. Le mécanisme supporte aussi bien des défauts de fabrication, que des pannes de vieillissement après que la puce est en service dans l'équipement. Notre proposition est évaluée en utilisant un prototype virtuel précis au cycle d'une architecture « many-core » existante. / The always increasing performance demands of applications such as cryptography, scientific simulation, network packets dispatching, signal processing or even general-purpose computing has made of many-core architectures a necessary trend in the processor design. These architectures can have hundreds or thousands of processor cores, so as to provide important computational throughputs with a reasonable power consumption. However, their important transistor density makes many-core architectures more prone to hardware failures. There is an augmentation in the fabrication process variability, and in the stress factors of transistors, which impacts both the manufacturing yield and lifetime. A potential solution to this problem is the introduction of fault-tolerance mechanisms allowing the processor to function in a degraded mode despite the presence of defective internal components. We propose a complete in-the-field reconfiguration-based permanent failure recovery mechanism for shared-memory many-core processors. This mechanism is based on a firmware (stored in distributed on-chip read-only memories) executed at each hardware reset by the internal processor cores without any external intervention. It consists in distributed software procedures, which locate the faulty components (cores, memory banks, and network-on-chip routers), reconfigure the hardware architecture, and provide a description of the functional hardware infrastructure to the operating system. Our proposal is evaluated using a cycle-accurate SystemC virtual prototype of an existing many-core architecture. We evaluate both its latency, and its silicon cost.
107

One-to-Many and Many-to-Many Collective Communication Operations on Grids

Gupta, Rakhi, 12 1900
Collective communication operations are widely used in MPI applications and play an important role in their performance. Hence, various projects have focused on optimizing collective communications for various kinds of parallel computing environments, including LAN settings, heterogeneous networks, and most recently Grid systems. What distinguishes Grids from all the other environments is the heterogeneity of hosts and network, and dynamically changing resource characteristics, including load and availability. The first part of the thesis develops a solution for MPI broadcast (one-to-many) on Grids. Some current strategies take into consideration static information about network topology to determine an efficient broadcast tree for Grids; others take into account only transient network characteristics. We combine both strategies and cluster the network dynamically on the basis of link bandwidths. Given a set of network parameters, we use Simulated Annealing (SA) to obtain the best schedule. We can also time-tune individual SAs to adapt the solution-finding process on the basis of the estimated time available before the next broadcast invocation in the application. We also developed a software architecture for updating schedules. We compared our algorithm with earlier approaches under loaded network conditions and obtained an average performance improvement of 20%. The second part of the thesis extends the work to the MPI allgather (many-to-many) operation. Current popular techniques use strict hierarchical schemes for this operation, wherein a representative (or coordinator) node is chosen from each cluster and inter-cluster communication goes through these representatives. This is suboptimal, as inter-cluster communication usually travels over high-capacity links that can sustain more than one transfer at the same throughput. We developed a cluster-based, incremental heuristic algorithm for allgather on Grids. We compared the time taken by allgather schedules determined by this algorithm with current popular implementations, and also with a strategy in which allgather is constructed from a set of broadcast trees. We obtained an average performance improvement of 67% over existing strategies.
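A minimal sketch of a simulated-annealing search over broadcast trees, assuming a matrix of point-to-point transfer costs and a deliberately crude cost model in which each parent sends to its children one after another (send serialization is what makes deep trees worthwhile). The move, cooling schedule, and cost model are illustrative, not the thesis's tuned SA.

```python
import math
import random
from collections import deque

def tree_cost(parent, cost):
    """Broadcast completion time: node v receives at its parent's receive
    time plus the link costs of the parent's earlier sequential sends."""
    n = len(parent)
    children = [[] for _ in range(n)]
    for v in range(1, n):
        children[parent[v]].append(v)
    recv = [0.0] * n
    queue = deque([0])               # node 0 is the broadcast root
    while queue:
        u = queue.popleft()
        t = recv[u]
        for v in children[u]:
            t += cost[u][v]          # u sends to its children one by one
            recv[v] = t
            queue.append(v)
    return max(recv)

def creates_cycle(parent, v, cand):
    u = cand                         # walk from the candidate parent to root
    while u != 0:
        if u == v:
            return True              # v would become its own ancestor
        u = parent[u]
    return False

def anneal_broadcast(cost, iters=20000, temp=1.0, cooling=0.9995):
    n = len(cost)
    parent = [0] * n                 # initial flat tree: root sends to all
    cur = best = tree_cost(parent, cost)
    best_parent = parent[:]
    for _ in range(iters):
        v = random.randrange(1, n)   # re-parent a random non-root node
        cand = random.randrange(0, n)
        if cand == v or creates_cycle(parent, v, cand):
            continue
        old, parent[v] = parent[v], cand
        new = tree_cost(parent, cost)
        # Always accept improvements; accept regressions with Boltzmann odds.
        if new <= cur or random.random() < math.exp((cur - new) / temp):
            cur = new
            if new < best:
                best, best_parent = new, parent[:]
        else:
            parent[v] = old
        temp *= cooling
    return best, best_parent

# Two clusters {0,1,2} and {3,4}: cheap links inside, costly links across.
random.seed(1)
cost = [[0, 1, 1, 10, 10], [1, 0, 1, 10, 10], [1, 1, 0, 10, 10],
        [10, 10, 10, 0, 1], [10, 10, 10, 1, 0]]
print(anneal_broadcast(cost)[0])  # a deep tree beats the flat one (cost 22)
```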
108

Efficient execution of multi-agent systems on GPU

Laville, Guillaume, 27 June 2014
Recent years have seen the emergence of parallelism in many fields of computer science, explained by the stagnation of clock frequencies at the hardware level and by the growing use of parallel execution platforms at the software level. A form of parallelism is also present in multi-agent systems, which facilitate the description of complex systems as a collection of interacting entities. While the match between this execution parallelism and the conceptual parallelism of such models seems natural, parallelization remains difficult because of the many adaptations required and the dependencies present in many multi-agent systems. In this thesis, we propose a solution to facilitate the implementation of these models on a parallel platform such as the GPU. Our library, MCMAS, addresses this problem through two programming interfaces: a low-level layer, MCM, providing direct access to OpenCL, and a high-level set of plugins requiring no GPU-related knowledge. We then study the use of this library on three existing multi-agent models: a predator-prey model, the MIOR model, and the Collembola model. To demonstrate the interest of this approach, we present a performance study of each model and an analysis of the factors contributing to efficient execution on GPUs. We conclude with an overview of the work and results presented, and suggest possible directions for improving our solution.
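A minimal sketch of why such agent models map well to GPUs, assuming a predator-prey grid whose per-cell update is expressed as a data-parallel array operation; numpy stands in here for an OpenCL kernel, and the update rule is illustrative, not one of MCMAS's plugins.

```python
import numpy as np

def step(prey, pred, growth=0.2, predation=0.05, decay=0.1):
    """One synchronous update of every grid cell at once: the same
    arithmetic applied uniformly to all cells, which is exactly the
    shape of computation a GPU kernel runs with one work-item per cell."""
    eaten = predation * prey * pred
    prey = np.clip(prey + growth * prey - eaten, 0.0, None)
    pred = np.clip(pred + eaten - decay * pred, 0.0, None)
    return prey, pred

prey = np.random.rand(256, 256)         # prey density per grid cell
pred = np.random.rand(256, 256) * 0.1   # predator density per grid cell
for _ in range(100):
    prey, pred = step(prey, pred)
print(prey.mean(), pred.mean())
```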
109

Implementing a Real-time Avionic Application on a Many-core Processor

Lo, Moustapha, 22 February 2019
Traditional single-core processors are no longer sufficient to meet the growing performance needs of avionic functions. Multi-core and many-core processors have emerged in recent years, making it possible to integrate several functions and to benefit from the available performance per watt through resource sharing. However, not all multi-core and many-core processors satisfy avionic constraints: we prefer determinism over raw computing power, because certifying such processors requires mastering that determinism. The aim of this thesis is to evaluate the Kalray many-core processor (MPPA-256) in an industrial avionic context. We chose the maintenance function HMS (Health Monitoring System), which requires high bandwidth and a bounded response time. This function also has parallelism properties: it processes vibration data from sensors that are functionally independent, so their processing can be parallelized across several cores. The particularity of this study is that it deploys an existing sequential function on a many-core architecture, from data acquisition to the computation of health indicators, with a strong emphasis on the input/output data flow. Our research led to five contributions:
• Transformation of the existing algorithms into incremental ones able to process data as samples arrive from the sensors.
• Management of the input flow of vibration samples up to the computation of health indicators: the availability of data in the internal cluster, the moment it is consumed, and the estimation of the computational load.
• Lightweight, minimally intrusive time measurements taken directly on the MPPA-256 by adding timestamps to the data flow.
• A software architecture, based on a three-stage pipeline, that respects real-time constraints even in the worst case.
• An illustration of the limits of the existing function: our experiments showed that contextual parameters of the helicopter, such as rotor speed, must be correlated with the health indicators to reduce false alarms.
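A minimal sketch of the incremental reformulation in the first contribution, assuming the health indicator is an RMS vibration level: the running sum of squares is updated per sample instead of recomputing over a full buffer. The indicator choice is illustrative; the thesis's HMS computes richer indicators.

```python
import math

class IncrementalRMS:
    """RMS vibration level maintained sample-by-sample, so the indicator
    is ready as soon as the last sample of a window arrives instead of
    after a whole-buffer pass."""
    def __init__(self):
        self.n = 0
        self.sum_sq = 0.0

    def push(self, x):
        self.n += 1
        self.sum_sq += x * x

    def value(self):
        return math.sqrt(self.sum_sq / self.n) if self.n else 0.0

rms = IncrementalRMS()
for sample in (0.1, -0.4, 0.25, -0.3):  # samples as they arrive
    rms.push(sample)
print(rms.value())
```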
110

Scheduling of certifiable mixed-criticality systems

Socci, Dario, 09 March 2016
Modern real-time systems tend to be mixed-critical, in the sense that they integrate applications at different levels of criticality on the same computational platform. Integration brings reduced cost, weight, and power consumption, which can be crucial for modern applications such as Unmanned Aerial Vehicles (UAVs). On the other hand, it leads to major complications in system design. Moreover, such systems are subject to certification, and different criticality levels need to be certified to different levels of assurance. Among other aspects, the real-time scheduling of certifiable mixed-criticality systems has been recognized as a challenging problem. Traditional techniques require complete isolation between criticality levels or global certification to the highest level of assurance, which wastes resources and thus loses the advantage of integration. This has led to a wave of research in the real-time community, and many solutions have been proposed. Among those, one of the most popular methods used to schedule such systems is Audsley's approach. However, this method has limitations, which we discuss in this thesis. These limitations are more pronounced in multiprocessor scheduling, where priority-based scheduling loses some important properties. For this reason, scheduling algorithms for multiprocessor mixed-criticality systems are less numerous in the literature than single-processor ones and are usually built on restrictive assumptions. This is particularly problematic since industrial real-time systems strive to migrate from single-core to multi-core and many-core platforms. We therefore motivate and study a different approach that can overcome these problems.
One restriction on the practical usability of many mixed-criticality and multiprocessor scheduling algorithms is the assumption that jobs are independent; in reality they often have precedence constraints. In the thesis we present the mixed-criticality variant of the problem formulation and extend the system load metrics to precedence-constrained task graphs. We also show that our proposed methodology and scheduling algorithm, MCPI, can be extended to dependent jobs without major modification, with performance similar to the independent-jobs case. Another topic treated in this thesis is time-triggered scheduling. This class of schedulers is important because it considerably reduces the uncertainty of job execution intervals, simplifying the certification of safety-critical systems. It also simplifies any auxiliary timing-based analyses that may be required to validate important extra-functional properties in embedded systems, such as interference on shared buses and caches, peak power dissipation, and electromagnetic interference. The trivial way to obtain a time-triggered schedule is to simulate the worst-case scenario of an event-triggered algorithm. Applied directly, however, this method is not efficient for mixed-criticality systems, which have multiple corner-case scenarios instead of a single worst case. For this reason, it was proposed in the literature to fold all scenarios into just a few tables, one per criticality mode. We call this scheduling approach Single Time Table per Mode (STTM) and contribute in this context: we introduce a method that transforms practically any scheduling algorithm into an STTM one. It works optimally on a single core and shows good experimental results on multi-cores. Finally, we studied the practical realization of mixed-criticality systems. Our effort in this direction is a design flow for multicore mixed-criticality systems in which the proposed model of computation is a network of deterministic multi-periodic synchronous processes. Our approach is demonstrated using a publicly available toolset, an industrial application use case, and a multi-core platform.
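A minimal sketch of the Audsley approach discussed above, for single-processor fixed-priority assignment over a finite job set: priorities are assigned lowest-first, giving the lowest free priority to any job that meets its deadline there. The schedulability test below is a naive unit-time simulation under assumed integer job parameters; real mixed-criticality variants replace it with mode-aware analyses.

```python
def meets_deadline_as_lowest(jobs, low):
    """Unit-time preemptive simulation on one core with job `low` at the
    lowest priority; the relative order of the other jobs cannot change
    when `low` finishes (the classic Audsley observation)."""
    release, wcet, deadline = jobs[low]
    horizon = max(d for _, _, d in jobs) + sum(c for _, c, _ in jobs)
    high = [list(j) for i, j in enumerate(jobs) if i != low]
    rem = wcet
    for t in range(horizon):
        ready = [j for j in high if j[0] <= t and j[1] > 0]
        if ready:
            ready[0][1] -= 1          # any pending higher-priority job runs
        elif release <= t and rem > 0:
            rem -= 1
            if rem == 0:
                return t + 1 <= deadline
    return False

def audsley(jobs):
    """Assign priorities lowest-first: any job that meets its deadline at
    the lowest remaining priority gets it; recurse on the rest."""
    ids = list(range(len(jobs)))
    lowest_first = []
    while ids:
        for j in ids:
            if meets_deadline_as_lowest([jobs[i] for i in ids], ids.index(j)):
                lowest_first.append(j)
                ids.remove(j)
                break
        else:
            return None                # no feasible fixed-priority order
    return lowest_first[::-1]          # job ids, highest priority first

# Three jobs as (release, WCET, deadline).
print(audsley([(0, 2, 10), (1, 3, 6), (0, 1, 3)]))  # -> [2, 1, 0]
```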
