211

Modelagem fuzzy funcional evolutiva participativa / Evolving participatory learning fuzzy modeling

Lima, Elton Mario de, 07 April 2008
Advisors: Fernando Antonio Campos Gomide, Rosangela Ballini / Master's dissertation (2008) - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação / Abstract: This work introduces an approach to develop evolving fuzzy rule-based models using participatory learning. Participatory learning assumes that learning and beliefs about a system depend on what the learning mechanism already knows about the system itself. Participatory learning naturally augments clustering and yields an effective unsupervised fuzzy clustering algorithm for on-line, real-time domains and applications. Clustering is an essential step in constructing evolving fuzzy models and plays a key role in modeling performance and model quality. A recursive least squares approach to estimate the consequent parameters of the fuzzy rules for on-line modeling is emphasized. Experiments with the classic Box-Jenkins benchmark are conducted to compare the performance of evolving participatory learning with the evolving fuzzy system modeling approach and with alternative fuzzy modeling and neural methods. The experiments show the efficiency of evolving participatory learning in handling the benchmark problem. The evolving participatory learning method is also used to forecast the average hourly load of an electric generation plant and is compared against evolving fuzzy system modeling using actual data. The results confirm the potential of the evolving fuzzy participatory method to solve real-world modeling problems. / Master's / Industrial Automation / Master in Electrical Engineering
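
The participatory learning update at the heart of this model can be illustrated with a short sketch in the spirit of Yager's participatory learning, on which the thesis builds. The learning rate `alpha`, arousal rate `beta`, compatibility threshold `tau`, and the distance normalization are illustrative assumptions, not the dissertation's actual algorithm or settings.

```python
import numpy as np

def epl_update(centers, arousal, x, alpha=0.1, beta=0.1, tau=0.4):
    """One participatory-learning clustering step (illustrative sketch).

    centers: list of cluster centers (np arrays; data scaled to [0, 1])
    arousal: per-cluster arousal indices in [0, 1]
    x: new observation
    """
    if not centers:                      # first sample seeds the first rule
        return [x.copy()], [0.0]

    # Compatibility of x with each center (1 = identical, 0 = maximally far).
    rho = [1.0 - np.linalg.norm(x - v) / np.sqrt(len(x)) for v in centers]
    i = int(np.argmax(rho))

    # The arousal index rises when observations keep contradicting the model.
    arousal[i] += beta * ((1.0 - rho[i]) - arousal[i])

    if rho[i] >= tau:
        # Compatible sample: move the winning center toward x; the arousal
        # index weakens the resistance of current beliefs to change.
        centers[i] = centers[i] + alpha * (rho[i] ** (1.0 - arousal[i])) * (x - centers[i])
    else:
        # Incompatible sample: evolve the rule base by creating a new cluster.
        centers.append(x.copy())
        arousal.append(0.0)
    return centers, arousal
```

In the full evolving model, each cluster would carry a rule consequent whose parameters are fitted on-line by recursive least squares, as the abstract notes.
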
212

Modelagem de um relé de proteção diferencial de transformador no RTDS / Modeling a transformer differential protection relay in the RTDS

Magrin, Fabiano Gustavo Silveira, 1978-, 25 August 2018
Advisor: Maria Cristina Dias Tavares / Master's dissertation (2014) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: Due to the difficulty of testing real systems, engineers continuously look for tools and models to simulate or emulate real systems in the laboratory. With this focus, the objective of this work was to develop a transformer differential protection relay model in the Real Time Digital Simulator (RTDS), based on the SEL-787 relay manufactured by Schweitzer Engineering Laboratories Inc. The reason for modeling a specific relay already on the market, rather than a generic model, is that laboratory studies must yield concrete results that represent the real system; this faithful representation gives engineers more solid data to support future expansions of national and international electric systems. After the mathematical model of the relay was developed, dedicated test routines were created to analyze it, and the model was tested together with a real SEL-787 so that the results could be compared. The situations a transformer differential protection relay faces in the field were studied and analyzed, such as transformer energization (inrush), saturation, external faults, external faults with current-transformer saturation, faults internal to the differential zone but external to the transformer, and faults internal to the transformer such as turn-to-turn and turn-to-ground faults. The same tests were applied to the differential relay model already included in the RTDS library in order to verify whether generic models really produce results different from specific models. This research is the first to present a model of a commercial transformer differential protection relay for the RTDS library. / Master's / Electrical Energy / Master in Electrical Engineering
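
The decision core of a percentage differential element, which a relay model of this kind must reproduce, can be sketched briefly. This is the generic textbook formulation with invented settings (`slope`, `pickup`, the second-harmonic blocking threshold), not the SEL-787's actual logic.

```python
def differential_trip(i_primary, i_secondary, slope=0.3, pickup=0.5,
                      second_harmonic_ratio=0.0, harmonic_block=0.15):
    """Generic percentage differential check (illustrative sketch).

    i_primary, i_secondary: complex current phasors in per-unit, already
    compensated for CT ratio and transformer vector group.
    """
    # Operate quantity: current that "leaks" out of the protected zone.
    i_op = abs(i_primary + i_secondary)
    # Restraint quantity: average through-current magnitude.
    i_res = (abs(i_primary) + abs(i_secondary)) / 2.0

    # Second-harmonic blocking avoids tripping on inrush, whose
    # differential current is rich in second harmonic.
    if second_harmonic_ratio > harmonic_block:
        return False

    # Trip when the operate current exceeds both the minimum pickup
    # and the slope-proportional restraint.
    return i_op > pickup and i_op > slope * i_res
```

Test routines like those described above would drive such logic with waveforms for inrush, through-faults with CT saturation, and internal faults, checking that it trips only for the last category.
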
213

Contributions à la génération de tests à partir d'automates à pile temporisés / Contributions to test generation from timed pushdown automata

M'Hemdi, Hana, 23 September 2016
The verification and validation of software components for real-time systems is a major challenge in the development of automated systems. The models of such systems must be verified, and the conformance of their implementations with respect to their models must be validated. Our framework is that of real-time recursive systems modelled by timed pushdown automata with deadlines (TPAIO). The deadlines impose time-progress conditions. The objective of this thesis is to propose test generation methods for TPAIO. Our contributions are as follows. Firstly, a conformance relation for TPAIO is introduced. Secondly, a polynomial method of test generation from a deterministic TPAIO with only lazy deadlines is defined: a polynomial algorithm computes a partial reachability timed automaton by removing the stack constraints. This method is incomplete, but the incompleteness is not a problem, because software testing is by nature an incomplete activity. Thirdly, we define a method for generating test cases from a deterministic TPAIO with only outputs and only delayable deadlines. It applies to abstractions of timed recursive programs and consists of generating test cases by computing an over-approximated tester. Finally, we propose a generalization of the test generation process to a nondeterministic TPAIO with inputs/outputs and arbitrary deadlines. Its ability to detect non-conformant implementations is assessed by a mutation technique.
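
To make the TPAIO ingredients named above concrete (locations, clocks, stack operations, deadlines attached to transitions), here is a minimal, hypothetical encoding of a single-clock timed pushdown automaton and the replay of a timed word on it. The thesis's formal definitions are richer; everything here is an illustrative simplification.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Transition:
    source: str
    target: str
    action: str                 # input/output action label
    guard: tuple                # (lower, upper) clock bounds
    deadline: str               # time-progress condition: 'lazy' or 'delayable'
    push: Optional[str] = None  # symbol pushed onto the stack, if any
    pop: Optional[str] = None   # symbol expected on top and popped, if any

def replay(transitions, timed_word, initial="q0"):
    """Replay a timed word of (delay, action) pairs; True if accepted.
    The single clock is reset on every transition, for brevity."""
    loc, clock, stack = initial, 0.0, []
    for delay, action in timed_word:
        clock += delay
        for t in transitions:
            if (t.source == loc and t.action == action
                    and t.guard[0] <= clock <= t.guard[1]
                    and (t.pop is None or (stack and stack[-1] == t.pop))):
                if t.pop is not None:
                    stack.pop()
                if t.push is not None:
                    stack.append(t.push)
                loc, clock = t.target, 0.0
                break
        else:
            return False   # no enabled transition: the run is rejected
    return True
```

A conformance test in this spirit feeds timed words to the implementation and compares its observable outputs against such a model run.
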
214

Aspect Analyzer: Ett verktyg för automatiserad exekveringstidsanalys av komponenter och aspekter / Aspect Analyzer: A Tool for Automated WCET Analysis of Aspects and Components

Uhlin, Pernilla, January 2002
The increasing complexity in the development of configurable real-time systems has given rise to new software engineering techniques, such as aspect-oriented software development and component-based software development. These techniques allow encapsulation of the system's crosscutting concerns and increase the modularity of the software. The properties of a component that influence the system's performance or semantics are specified separately in entities called aspects, while the basic functionality of the property remains in the component. When building a real-time system, different sets of aspects and components can be combined, resulting in different configurations of the system. The temporal behavior of the system changes with each configuration, so a way to ensure the predictability of the system is needed. This thesis presents a tool for aspect-level worst-case execution time analysis, which gives a priori information about the temporal behavior of the system before aspects are woven into components.
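
The aspect-level analysis sketched in this abstract can be thought of as a compositional bound: a woven component's WCET is its core bound plus the bounds of each advice applied at its join points. The additive model and all names below are illustrative assumptions, not the tool's actual formulas.

```python
def woven_wcet(component_wcet, advice_wcets, joinpoint_counts):
    """Upper-bound the WCET of a component after aspect weaving.

    component_wcet: WCET bound of the un-woven component (cycles)
    advice_wcets: {advice_name: WCET bound of that advice}
    joinpoint_counts: {advice_name: join points executing that advice
                       on the worst-case path through the component}
    """
    total = component_wcet
    for advice, wcet in advice_wcets.items():
        total += wcet * joinpoint_counts.get(advice, 0)
    return total

# Example: a logging advice woven at 2 join points, a guard advice at 1.
print(woven_wcet(1200, {"logging": 150, "guard": 80},
                 {"logging": 2, "guard": 1}))   # 1580 cycles
```

A bound computed this way is a priori information: it holds for any configuration that weaves exactly these aspects, before composition takes place.
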
215

Verifikation av verktyget aspect analyzer / Aspect analyzer tool verification

Bodin, Joakim, January 2003
Rising complexity in the development of real-time systems has made it crucial to have reusable components and a more flexible way of configuring these components into a coherent system. Aspect-oriented system development (AOSD) is a technique that allows one to put a system's crosscutting concerns into "modules" called aspects. By applying AOSD in real-time and embedded system development, one can expect reductions in the complexity of system design and development. A problem with AOSD in its current form is that it does not support predictability in the time domain. Hence, in order to use AOSD in real-time system development, we need ways of analyzing the temporal behavior of aspects, components and the resulting system (made by weaving aspects into components). The aspect analyzer is a tool that computes the worst-case execution time (WCET) for a set of components and aspects, thus enabling support for predictability in the time domain of aspect-oriented real-time software. Until now, a limitation of the aspect analyzer was that it had not been verified whether it produces WCET values close to the measured WCET of an aspect-oriented real-time system, or to the WCET computed with another analysis technique. Therefore, in this thesis we verify the correctness of the aspect analyzer using a number of different methods for WCET analysis. These investigations of the correctness of its output gave confidence in the automated WCET analysis. In addition, performing this verification led to identifying the steps necessary to compute the WCET of a piece of a program when using a third-party tool, which makes it possible to write accurate input files for the aspect analyzer.
216

Verification techniques in the context of event-triggered soft real-time systems / Verifikationstekniker för event-triggade mjuka realtidssystem

Norberg, Johan, January 2007
When exploring a verification approach for Komatsu Forest's control system for their forest machines (Valmet), the context of soft real-time systems is illuminated. Because of the nature of this context, the verification process is based on empirical corroboration of requirement fulfillment rather than on a formal proving process. After an analysis of the software testing literature, two paradigms have been defined in order to highlight important concepts for soft real-time systems. The paradigms are based on an abstract stimuli/response model, which conceptualizes a system with inputs and outputs. Since the system is perceived as a black box, its internal details are hidden and focus is placed on a more abstract level. The first paradigm, the "input data paradigm", is concerned with what data to input to the system. The second paradigm, the "input data mechanism paradigm", is concerned with how the data is sent, i.e. the actual input mechanism. By specifying different dimensions associated with each paradigm, it is possible to define their unique characteristics. The advantage of this kind of theoretical construction is that each paradigm creates a unique sub-field with its own problems and techniques. The problems defined for this thesis are primarily focused on the input data mechanism paradigm, where the devised dimensions are applied. New verification techniques are deduced and analyzed based on general software testing principles. Based on the constructed theory, a test system architecture for the control system is developed. Finally, an implementation is constructed based on the architecture and a practical scenario, and its automation capability is assessed. The practical context for the thesis is a new simulator under development. It is based on LabVIEW and PXI technology and handles over 200 I/O. Real machine components are connected to the environment, together with artificial components that simulate the engine, the hydraulic systems and a forest. Additionally, physical control sticks and buttons are connected to the simulator to enable user testing of the machine being simulated. The results associated with the thesis are, first of all, usable verification techniques. Generally speaking, some of these techniques are scalable and can be applied to an entire system, while other techniques are appropriate for selected subsets that need extra attention. Secondly, an architecture for an automated test system based on a selection of techniques has been constructed for the control system. Last but not least, as a result of this, a general test system has been implemented successfully, based on both C# and LabVIEW. What remains regarding the implementation is primarily to extend the system to include the full scope of features described in the architecture and to enable result analysis.
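
A black-box stimuli/response check of the kind both paradigms presuppose can be sketched as follows. The system under test is a stand-in function and the 50 ms deadline is an assumed requirement, not one taken from Komatsu Forest's control system.

```python
import time

def check_response(system, stimulus, expected, deadline_s=0.050):
    """Send one stimulus to a black-box system and corroborate, empirically,
    that the expected response arrives within its soft deadline."""
    start = time.monotonic()
    response = system(stimulus)          # input mechanism: a direct call here;
    elapsed = time.monotonic() - start   # in practice CAN frames, I/O lines...
    return response == expected and elapsed <= deadline_s

# Stand-in system under test: echoes the stimulus after a small delay.
def fake_system(stimulus):
    time.sleep(0.001)
    return stimulus

print(check_response(fake_system, "lever_forward", "lever_forward"))  # True
```

The input data paradigm asks which `stimulus` values to send; the input data mechanism paradigm asks how they are delivered, e.g. the timing, ordering and interleaving of many such calls.
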
217

OFFLINE SCHEDULING OF TASK SETS WITH COMPLEX END-TO-END DELAY CONSTRAINTS

Holmberg, Jonas, January 2017
Software systems in the automotive domain are generally safety-critical and subject to strict timing requirements. Systems of this character are often constructed from periodically executed tasks that have hard deadlines. In addition, these systems may have further deadlines specified on cause-effect chains, or simply task chains. The chains are defined over existing tasks in the system, hence they are not stand-alone additions to it. Each chain provides an end-to-end timing constraint on the propagation of data through the chain of tasks. These constraints specify the additional timing requirements that need to be fulfilled when searching for a valid schedule. In this thesis, an offline non-preemptive scheduling method designed for single-core systems is presented. The scheduling problem is defined and formulated using Constraint Programming. In addition, to ensure that end-to-end timing requirements are met, job-level dependencies are considered during schedule generation. This approach guarantees that individual task periods along with end-to-end timing requirements are always met, if a schedule exists. The results show a good increase in schedulability ratio when job-level dependencies are utilized, compared to the case where they are not specified; as system utilization increases, this improvement grows even larger. Depending on the system size and complexity the improvement can vary, but in many cases it is more than double. Schedule generation is also performed within a reasonable time frame, which is a benefit during the development process, since it allows fast verification when changes are made to the system. Further, the thesis provides an overview of the entire process, starting from a system model and ending with a fully functional schedule executing on a hardware platform.
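
A toy version of such a constraint formulation is sketched below, using the z3 solver as a stand-in for whatever constraint-programming framework the thesis employs; the two-job instance, execution times, deadlines, and the end-to-end bound are all invented for illustration.

```python
from z3 import Ints, Solver, Or, sat

# Two non-preemptive jobs on one core: execution times and deadlines.
C1, D1 = 2, 10   # a job of the producer task in a task chain
C2, D2 = 3, 10   # a job of the consumer task

s1, s2 = Ints("s1 s2")       # start times, decided by the solver
solver = Solver()
solver.add(s1 >= 0, s2 >= 0)
solver.add(s1 + C1 <= D1, s2 + C2 <= D2)      # individual job deadlines
solver.add(Or(s1 + C1 <= s2, s2 + C2 <= s1))  # single core: no overlap
solver.add(s1 + C1 <= s2)                     # job-level dependency: the
                                              # consumer reads the producer's data
solver.add(s2 + C2 - s1 <= 8)                 # end-to-end (cause-effect) delay

if solver.check() == sat:
    m = solver.model()
    print("s1 =", m[s1], ", s2 =", m[s2])     # e.g. s1 = 0, s2 = 2
```

The real formulation would range over all jobs in the hyperperiod and all specified chains, but the structure, deadlines plus non-overlap plus dependencies plus end-to-end bounds, is the same.
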
218

Provisão integrada de QoS relativa e absoluta em serviços computacionais interativos com requisitos de responsividade de tempo real / Integrated provision of relative and absolute QoS in interactive computer services with real-time responsiveness requirements

Priscila Tiemi Maeda Saito, 04 March 2010
Emerging computer system applications posing responsiveness requirements in the form of response times demand a real-time systems approach. In these systems, quality of service is expressed as guarantees on time constraints. A wide range of techniques for QoS provision is found in the literature. These techniques are based on either service differentiation (relative QoS) or the specification of performance guarantees (absolute QoS). However, the integrated provision of both relative and absolute QoS at the application level is not as well explored. This work presents the study, analysis and proposal of a real-time scheduling method in a simulated environment. The method is based on adaptive virtual contracts and a feedback model. The goal is to relax the time constraints of less demanding users and prioritize those of the most demanding users, without degrading the quality of the system as a whole. Strategies toward this goal are exploited at the system scheduling level and are aimed at the problem of fulfilling service-level agreements that specify average response time requirements. The results achieved with the proposed method indicate an improvement in relative and absolute QoS and better user satisfaction. This work also proposes an extension to the models conventionally studied in this context, generalizing the original formulation from two to n service classes.
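
The feedback loop implied here, adjusting scheduling shares so that each class's measured average response time tracks its contracted target, can be sketched roughly as below. The proportional gain, the floor on shares, and the two-class instance are invented for illustration, not the thesis's actual controller.

```python
def adjust_shares(shares, measured_avg, contracted_avg, gain=0.1):
    """One feedback step: grow the scheduling share of classes missing
    their contracted average response time, shrink it for classes with
    slack, then renormalize so the shares sum to one."""
    for c in shares:
        error = measured_avg[c] - contracted_avg[c]   # > 0 means too slow
        shares[c] = max(0.05, shares[c] * (1.0 + gain * error / contracted_avg[c]))
    total = sum(shares.values())
    return {c: v / total for c, v in shares.items()}

shares = {"premium": 0.5, "basic": 0.5}
shares = adjust_shares(shares,
                       measured_avg={"premium": 120.0, "basic": 80.0},
                       contracted_avg={"premium": 100.0, "basic": 200.0})
print(shares)  # premium's share grows, basic's shrinks
```

Generalizing from two classes to n classes, as the abstract proposes, costs nothing here: the loop already iterates over however many classes the dictionaries contain.
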
219

A simulation-based approach to test the performance of large-scale real time software systems

Waqas, Muhammad, January 2020
Background: A real-time system operates under time constraints, and its correctness depends on the time at which results are generated. Different industries use different types of real-time systems, such as telecommunications, air traffic control, power generation, and spacecraft systems. One category of real-time systems is required to handle millions of users and operations at the same time; those systems are called large-scale real-time systems. In the telecommunication sector, many real-time systems are large scale, as they need to handle millions of users and resources in parallel. Performance is an essential aspect of this type of system; unpredictable behavior can cost telecom operators millions of dollars in a matter of seconds. The problem is that existing models for performance analysis of these types of systems are not cost-effective and require a lot of knowledge to deploy. In this context, we have developed a performance simulator tool based on XGBoost, Random Forest, and Decision Tree modeling. Objectives: The thesis aims to develop a cost-effective approach to support the performance analysis of large-scale real-time telecommunication systems. The idea is to develop and implement a solution that simulates the telecommunication system using some of the most promising identified factors that affect its performance. Methods: We performed an improvement case study at Ericsson. The performance factors were identified through a dataset generated in a performance testing session, an investigation conducted on the same system, and unstructured interviews with the system experts. The approach was selected through a literature review. The Performance Simulator was validated through static analysis and user feedback gathered with a questionnaire. Results: The results show that the Performance Simulator can be helpful for the performance analysis of large-scale real-time telecommunication systems; its ability to support performance analysis of other real-time systems was assessed by collecting multiple expert opinions. Conclusions: The developed and validated approach demonstrates potential usefulness in performance analysis and can benefit significantly from further enhancements. The specific amount of data used for training might limit the generalization of the research to other real-time systems. In the future, this study can be extended with more inputs from large-scale real-time systems.
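
The modeling core of such a simulator, fitting tree-based regressors that map load factors to a performance metric, can be sketched as follows. The synthetic data and feature meanings are invented; only the three model families come from the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor   # assumes the xgboost package is installed

# Stand-in data: rows = samples from performance-testing sessions,
# columns = load factors (e.g. active users, requests/s, payload size).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(1000, 3))
y = 50 * X[:, 0] + 30 * X[:, 1] ** 2 + rng.normal(0, 1, 1000)   # e.g. latency

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "decision_tree": DecisionTreeRegressor(max_depth=6),
    "random_forest": RandomForestRegressor(n_estimators=200),
    "xgboost": XGBRegressor(n_estimators=200, learning_rate=0.1),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "R^2 =", round(model.score(X_te, y_te), 3))
```

Once trained on real test-session data, such models can answer "what if" questions about untested load levels far more cheaply than running the full system.
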
220

Deployment of mixed-criticality and data-driven systems on multi-core architectures / Déploiement de systèmes à flots de données en criticité mixte pour architectures multi-coeurs

Medina, Roberto, 30 January 2019
Nowadays, the design of modern safety-critical systems is pushing towards the integration of multiple system components onto a single shared computation platform. Mixed-criticality systems in particular allow critical components with a high degree of confidence (i.e. a low probability of failure) to share computation resources with less/non-critical components without requiring software isolation mechanisms (as opposed to partitioned systems). Traditionally, safety-critical systems have been conceived using models of computation like data-flow graphs and real-time scheduling to obtain logical and temporal correctness. Nonetheless, the resources given to data-flow representations and real-time scheduling techniques are based on worst-case analysis, which often leads to an under-utilization of the computation capacity: the allocated resources are not always completely used. This under-utilization becomes more notorious on multi-core architectures, where the difference between best- and worst-case performance is more significant. The mixed-criticality execution model proposes a solution to this problem. To allocate resources efficiently while ensuring safe execution of the most critical components, resources are allocated as a function of the operational mode the system is in. As long as sufficient processing capacity is available to respect all deadlines, the system remains in a 'low-criticality' operational mode. Nonetheless, if the system demand increases, critical components are prioritized to meet their deadlines, their computation resources are increased, and less/non-critical components are potentially penalized. The system is then said to transition to a 'high-criticality' operational mode. Yet the incorporation of mixed-criticality aspects into the data-flow model of computation is a very difficult problem, as it requires new scheduling methods capable of handling precedence constraints and variations in timing budgets. Although mixed-criticality scheduling has been well studied for single-core and multi-core platforms, the problem of data dependencies on multi-core platforms has rarely been considered. Existing methods lead to poor resource usage, which contradicts the main purpose of mixed criticality. For this reason, our first objective focuses on designing new, efficient scheduling methods for data-driven mixed-criticality systems. We define a meta-heuristic producing scheduling tables for all operational modes of the system. These tables are proven correct: when the system demand increases, critical components will never miss a deadline. Two implementations based on existing preemptive global algorithms were developed to gain in schedulability and resource usage; in some cases these implementations schedule more than 60% more systems than existing approaches. While the mixed-criticality model claims that critical and non-critical components can share the same computation platform, the interruption of non-critical components degrades their availability significantly. This is a problem, since non-critical components need to deliver a minimum service guarantee; recent works in mixed criticality have recognized this limitation. For this reason, we define methods to evaluate the availability of non-critical components. To our knowledge, our evaluations are the first capable of quantifying availability. We also propose enhancements, compatible with our scheduling methods, that limit the impact critical components have on non-critical ones. These enhancements are evaluated thanks to probabilistic automata and show a considerable improvement in availability, e.g. improvements of over 2% in a context where increases of the order of 10⁻⁹ are significant. Our contributions have been integrated into an open-source framework. This tool also provides an unbiased generator used to evaluate scheduling methods for data-driven mixed-criticality systems.
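
The low-to-high criticality mode switch at the heart of this execution model can be sketched as follows. The task set, the budget values, and the suspend-all-non-critical policy are simplifications invented for illustration; the thesis drives mode changes from precomputed scheduling tables instead.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    critical: bool
    c_lo: float   # optimistic budget used in low-criticality mode
    c_hi: float   # certified worst-case budget used in high-criticality mode

def on_budget_overrun(tasks, running):
    """Called when `running` exhausts its C(LO) budget without finishing:
    the system switches to high-criticality mode. Critical tasks keep
    executing under their larger C(HI) budgets; non-critical tasks are
    suspended until the system can safely return to low-criticality mode."""
    assert running.critical, "only critical tasks trigger the mode switch"
    hi_mode = [t for t in tasks if t.critical]
    suspended = [t for t in tasks if not t.critical]
    return hi_mode, suspended

tasks = [Task("flight_ctrl", True, 2.0, 5.0),
         Task("logging", False, 1.0, 1.0)]
hi, dropped = on_budget_overrun(tasks, tasks[0])
print([t.name for t in hi], [t.name for t in dropped])
```

The availability concern the abstract raises is visible even in this toy: every overrun suspends `logging`, so quantifying and bounding how often that happens is exactly what the proposed evaluation methods address.
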
