11

Kuro kolonėlių valdymo sistemos tyrimas / The analysis of fuel pump management system

Vaičys, Vytautas 25 May 2006 (has links)
This document is a master’s thesis analyzing an automated fuel pump management system. The first chapters give a general overview of the system and of the main problems faced during the planning and design phases of the project; possible solutions to these problems are then proposed and analyzed. Technical system information follows in the later chapters, where functional and non-functional requirements are discussed along with the main UML diagrams. The research phase of the thesis provides a detailed system analysis and software quality reports, which are later used to propose changes to the system. These changes are analyzed, designed and coded in the final, experimental part of the thesis. The main proposal is to convert the system architecture from flow-driven to event-driven. This change solves several uncovered architectural problems and improves the general quality of the system. The changes are tested and analyzed in the experimental chapter, and conclusions are drawn. The main conclusion, supported by the experimental data, is that the proposed architectural changes were chosen correctly.
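The central proposal of the abstract above, moving from a flow-driven to an event-driven architecture, can be illustrated with a minimal publish/subscribe sketch. The event names and handlers below are hypothetical, invented for illustration; the thesis does not specify the system's actual event set or implementation language.

```python
from collections import defaultdict

class EventBus:
    """Minimal synchronous event bus: handlers subscribe to named events."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event, handler):
        self._handlers[event].append(handler)

    def publish(self, event, payload=None):
        # In an event-driven design, components react to events rather than
        # being driven by a fixed control flow.
        for handler in self._handlers[event]:
            handler(payload)

log = []
bus = EventBus()
# Hypothetical pump events; the real system's event set is not given in the abstract.
bus.subscribe("nozzle_lifted", lambda p: log.append(f"unlock pump {p}"))
bus.subscribe("fueling_done", lambda p: log.append(f"bill pump {p}"))

bus.publish("nozzle_lifted", 3)
bus.publish("fueling_done", 3)
print(log)  # ['unlock pump 3', 'bill pump 3']
```

The point of the switch is that new reactions (logging, billing, alarms) can be added by subscribing another handler, without touching the control flow that emits the events.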
12

Elgesiu paremtos robotikos simuliavimo aplinkų, skirtų programavimo mokymui, tyrimas / Research on behavior-based robotics simulation environments, dedicated to teaching programming

Vaikšnys, Dainius 06 August 2014 (has links)
Roboto kūrimas yra nepaprastai brangus procesas. Sukonstruoti prototipą sunaudojama daug lėšų, o paaiškėjus trūkumams gali tekti jį perdaryti ne vieną kartą. Nemažiau sudėtingas yra algoritmų kūrimas roboto valdymui. Išbandyti valdančią programą su realiu prototipu sudėtinga, nes klaidos gali sukelti tiek smulkių gedimų tiek stipriai apgadinti įrangą. Šiame dokumente yra gilinamasi į roboto elgesio simuliavimą virtualioje erdvėje. Nagrinėjamos roboto simuliavimo aplinkos, teikiančios galimybę suprogramuoti ir išbandyti robotą valdančius algoritmus. Kreipiamas didelis dėmesys į tokios sistemos naudojimo paprastumą ir galimybę ją naudoti moksleivių skatinimui domėtis programavimu. Autorius pateikia projektą naujos simuliavimo aplinkos kūrimui ir realizuoja šią sistemą. Atliekamas tyrimas siekiant nustatyti kaip yra kuriamos roboto valdančios programos. Yra palyginama naujai sukurtos sistemos galimybė realizuoti valdymo algoritmus su rinkoje esančiu komerciniu sprendimu – Webots. Yra pateikiamos dvi roboto konstrukcijos su apibrėžtu roboto elgesiu ir suprogramuojamas valdymo kodas abejose sistemose. Roboto elgesio reikalavimuose atliekamas pakeitimas ir tiriama kokią įtaką tai turi robotą valdančios programos kodui. Galiausiai yra pristatoma naujai sukurta roboto simuliavimo aplinka. Aprašomas sistemos vartotojo vadovas, paaiškinama grafinė vartotojo sąsaja. / The process of robot creation is extremely expensive. Building a prototype consumes significant funds, and discovered flaws may force several rebuilds. No less challenging is the design of the control program for the robot, as testing it on a real prototype can damage the hardware or even destroy it. This document focuses on the simulation of robot behavior in a virtual environment. Existing simulation environments that allow programming and testing of control scripts are explored here.
A lot of attention is paid to the simplicity of the software interface and its possible use for encouraging schoolchildren to learn and enjoy programming. The author proposes a design for a new simulation environment and implements the software. Research is carried out on how control programs for robots are implemented. The way the control program is implemented in the new system is compared to an existing commercial simulation environment, Webots. Two specifications for robot design and behavior are provided, and control-program solutions are designed and compared in both systems. A modification to the robot specifications is then introduced, the solutions are adapted to comply with the change, and the impact on the control code is examined. Finally, the designed software is presented and explained. The chapter contains a user guide, a specification of the graphical user interface, and an explanation of its usage.
13

Supervision en ligne de propriétés temporelles dans les systèmes distribués temps-réel / Online monitoring of temporal properties in distributed real-time system

Baldellon, Olivier 07 November 2014 (has links)
Les systèmes actuels deviennent chaque jour de plus en plus complexes; à la distribution s’ajoutent les contraintes temps réel. Les méthodes classiques en charge de garantir la sûreté de fonctionnement, comme le test, l’injection de fautes ou les méthodes formelles ne sont plus suffisantes à elles seules. Afin de pouvoir traiter les éventuelles erreurs lors de leur apparition dans un système distribué donné, nous désirons mettre en place un programme, surveillant ce système, capable de lancer une alerte lorsque ce dernier s’éloigne de ses spécifications ; un tel programme est appelé superviseur (ou moniteur). Le fonctionnement d’un superviseur consiste simplement à interpréter un ensemble d’informations provenant du système sous forme de message, que l’on qualifiera d’évènement, et d’en déduire un diagnostic. L’objectif de cette thèse est de mettre en place un superviseur distribué permettant de vérifier en temps réel des propriétés temporelles. En particulier nous souhaitons que notre moniteur soit capable de vérifier un maximum de propriétés avec un minimum d’information. Ainsi notre outil est spécialement conçu pour fonctionner parfaitement même si l’observation est imparfaite, c’est-à-dire, même si certains évènements arrivent en retard ou s’ils ne sont jamais reçus. Nous avons de plus cherché à atteindre cet objectif de manière distribuée pour des raisons évidentes de performance et de tolérance aux fautes. Nous avons ainsi proposé un protocole distribuable fondé sur l’exécution répartie d’un réseau de Petri temporisé. Pour vérifier la faisabilité et l’efficacité de notre approche, nous avons mis en place une implémentation appelée Minotor qui s’est révélée avoir de très bonnes performances.
Enfin, pour montrer l’expressivité du formalisme utilisé pour exprimer les spécifications que l’on désire vérifier, nous avons détaillé un ensemble de propriétés sous forme de réseaux de Petri à double sémantique introduite dans cette thèse (l’ensemble des transitions étant partitionné en deux catégories de transitions, chacune de ces parties ayant sa propre sémantique). / Current systems are becoming more and more complex every day, being both distributed and subject to real-time constraints. Conventional methods for guaranteeing dependability, such as testing, fault injection or formal methods, are no longer sufficient on their own. In order to handle errors as they appear in a given distributed system, we want to implement a program that monitors the system and raises an alert when the system no longer respects its specification. Such a program is called a monitor. A monitor interprets information received from the system as messages (these messages are called events) and produces a diagnosis. The objective of this thesis is to build a distributed monitor for the online verification of temporal properties in real time. In particular, we want our monitor to check as many properties as possible with as little information as possible. Thus, our tool is designed to work correctly even if the observation is imperfect, that is to say, even if some events arrive late or are never received. We also achieve this goal through a highly distributed protocol, for reasons of performance and fault tolerance. To verify the feasibility and effectiveness of our approach, we built an implementation called Minotor, which was found to have very good performance. Finally, we detail a set of properties, expressed in our formalism, to show its expressiveness.
14

A novel method for the Approximation of risk of Blackout in operational conditions / Une nouvelle méthode pour le rapprochement des risques de "Blackout" dans des conditions opérationnelles

Urrego Agudelo, Lilliam 04 November 2016 (has links)
L'industrie de l'électricité peut être caractérisée par plusieurs risques: la réglementation, la capacité, l'erreur humaine, etc. L'un des aspects les plus remarquables, en raison de leur impact, est lié à ne pas répondre à la demande (DNS). Pour éviter les défaillances en cascade, des critères déterministes comme la N-1 ont été appliquées, ce qui permet d'éviter la défaillance initiale. Après une défaillance en cascade, des efforts considérables doivent être faits pour analyser les défauts afin de minimiser la possibilité d'un événement similaire. En dépit de tous ces efforts, des blackouts peuvent encore se produire. En effet, il est un défi, en raison du grand nombre d'interactions possibles et de leur diversité et complexité, pour obtenir une bonne prédiction d'une situation. Dans notre travail, une nouvelle méthodologie est proposée pour estimer le risque de blackout en utilisant des modèles de systèmes complexes. Cette approche est basée sur l'utilisation de variables qui peuvent être des précurseurs d'un événement DNS. Il est basé sur l'étude de la dépendance ou de corrélation entre les variables impliquées dans le risque de blackout, et la caractéristique d’auto-organisation critique (SOC) des systèmes complexes. La VaR est calculée en utilisant les données du système colombien et le coût du rationnement, y compris les variables économiques dans les variables techniques. Traditionnellement le risque augmente avec la racine carrée du temps, mais avec des séries de données qui présentent un comportement complexe, le taux de croissance est plus élevé. Une fois que les conditions de SOC sont déterminées, un Modèle de Flux de Puissance Statistique (SPFM) a été exécuté pour simuler le comportement du système et de ses variables pour les performances du système électrique.
Les simulations ont été comparées aux résultats du comportement de fonctionnement réel du système de puissance électrique. Le flux de puissance DC est un modèle simplifié, qui représente le phénomène complexe de façon simple, mais néglige cependant certains aspects des événements de fonctionnement du système qui peuvent se produire dans les blackouts. La représentation des défaillances en cascade et de l'évolution du réseau électrique dans un modèle simple permet l'analyse des relations temporaires dans l'exploitation des réseaux électriques, en plus de l'interaction entre la fiabilité à court terme et à long terme (avec un réseau d'amélioration). Cette méthodologie est axée sur la planification opérationnelle du lendemain (jour d'avance sur le marché), mais elle peut être appliquée à d'autres échelles de temps. Les résultats montrent que le comportement complexe suit une loi de puissance et que l'indice de Hurst est supérieur à 0,5. Les simulations basées sur notre modèle ont le même comportement que le comportement réel du système. En utilisant la théorie de la complexité, les conditions SOC doivent être mises en place pour le marché analysé du lendemain. Ensuite, une simulation inverse est exécutée, où le point final de la simulation est la situation actuelle du système, et permet au système d'évoluer et de répondre aux conditions requises par la caractéristique d’auto-organisation critique en un point de fonctionnement souhaité. Après avoir simulé le critère de fiabilité utilisé dans l’exploitation du système électrique pour les défaillances en cascade, les résultats sont validés par des défaillances historiques obtenues à partir du système électrique.
Ces résultats, permettent l'identification des lignes avec la plus grande probabilité de défaillance, la séquence des événements associés, et quelles simulations d'actions d’exploitation ou d'expansion, peuvent réduire le risque de défaillance du réseau de transmission. Les possibles avantages attendus pour le réseau électrique sont l'évaluation appropriée du risque du réseau, l'augmentation de la fiabilité du système, et un progrès de la planification du risque du lendemain et connaissance de la situation / The electricity industry is exposed to several risks: regulatory, adequacy, human error, etc. One of the most significant, because of its impact, is demand not supplied (DNS). To prevent cascading failures, particularly in reliability studies, deterministic criteria such as N-1 are applied, which avoid the initial failure event in the planning and operation of the system. In general, the analysis tools for these preventive actions are applied separately for planning and for operating an electric power system. After a cascading failure, considerable effort must be spent analyzing the faults in order to minimize the possibility of a similar event. In spite of all these efforts, blackouts or large cascading failures still happen, although such events are rare thanks to the industry's efforts. Indeed, obtaining a good prediction of a situation is a challenge from the point of view of analysis and simulation, due to the large number of possible interactions and their diversity and complexity. In our work, a new methodology is proposed to estimate blackout risk using complex-systems models. This approach is based on the use of variables that can be precursors of a DNS event.
In other terms, it is based on the study of the dependence or correlation between the variables involved in blackout risk, and the self-organized criticality (SOC) property of complex systems. VaR is calculated using data from the Colombian system and the cost of rationing, in order to estimate the cost of a blackout by including economic variables alongside the technical ones. In addition, risk traditionally grows with the square root of time, but for data series exhibiting complex behavior the growth rate is higher. Once the SOC conditions are determined, a Statistical Power Flow Model (SPFM) was executed to simulate the behavior of the system and its variables for the performance of the electrical system. Simulation results were compared to the real operating behavior of the electrical power system. The DC power flow is a simplified model that represents the complex phenomenon in a simple way, but it neglects some operational events that can occur in blackouts. Representing cascading failures and the evolution of the network in a simple model allows the analysis of temporal relations in the operation of electrical networks, as well as the interaction between short-term and long-term reliability (with network improvements). This methodology focuses on the operational planning of the following day (day-ahead market), but it can be applied to other time scales. The results show complex behavior with a power law, and the Hurst index is greater than 0.5. The simulations based on our model behave like the real system. To apply complexity theory, the SOC conditions must be established for the day-ahead market under analysis.
Then an inverse simulation is executed, in which the endpoint of the simulation is the current situation of the system; the system is allowed to evolve and to meet the self-organized criticality requisites at a desired operating point. After simulating the reliability criterion used in the operation of the electrical system for cascading failures, the results are validated against historical failures recorded in the electrical system. These results allow the identification of the lines with the highest probability of failure, the sequence of associated events, and which simulated operational or expansion actions can reduce the risk of failures in the transmission network. The expected benefits for the electrical network are an appropriate evaluation of network risk, increased system reliability (probabilistic analysis), and improved day-ahead risk planning (holistic analysis) and situational awareness.
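The Hurst index mentioned above can be estimated with rescaled-range (R/S) analysis: compute the range of cumulative deviations over windows of several sizes and fit the log-log slope. The sketch below is a generic textbook-style estimator, not the thesis's method or data; the window sizes and the white-noise test series are arbitrary choices.

```python
import math
import random

def hurst_rs(series, window_sizes=(8, 16, 32, 64)):
    """Estimate the Hurst exponent via rescaled-range (R/S) analysis."""
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(series) - n + 1, n):
            chunk = series[start:start + n]
            mean = sum(chunk) / n
            dev = [x - mean for x in chunk]
            cum, s = [], 0.0
            for d in dev:
                s += d
                cum.append(s)
            r = max(cum) - min(cum)                      # range of cumulative deviations
            sd = math.sqrt(sum(d * d for d in dev) / n)  # std dev of the window
            if sd > 0:
                rs_vals.append(r / sd)
        log_n.append(math.log(n))
        log_rs.append(math.log(sum(rs_vals) / len(rs_vals)))
    # The slope of log(R/S) versus log(n) is the Hurst estimate.
    mx = sum(log_n) / len(log_n)
    my = sum(log_rs) / len(log_rs)
    return (sum((x - mx) * (y - my) for x, y in zip(log_n, log_rs))
            / sum((x - mx) ** 2 for x in log_n))

random.seed(1)
white_noise = [random.gauss(0, 1) for _ in range(512)]
h = hurst_rs(white_noise)
print(round(h, 2))  # typically near 0.5 for uncorrelated noise
```

A persistent series such as the blackout-risk data described in the abstract would yield an estimate above 0.5, which is what motivates the SOC-based modeling.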
15

Group-EDF: A New Approach and an Efficient Non-Preemptive Algorithm for Soft Real-Time Systems

Li, Wenming 08 1900 (has links)
Hard real-time systems in robotics, space and military missions, and control devices are specified with stringent and critical time constraints. On the other hand, soft real-time applications arising from multimedia, telecommunications, Internet web services, and games are specified with more lenient constraints. Real-time systems can also be distinguished, in terms of their implementation, into preemptive and non-preemptive systems. In preemptive systems, tasks are often preempted by higher-priority tasks. Non-preemptive systems are gaining interest for implementing soft real-time applications on multithreaded platforms. In this dissertation, I propose a new algorithm that uses a two-level scheduling strategy for scheduling non-preemptive soft real-time tasks. Our goal is to improve the success ratio of the well-known earliest deadline first (EDF) approach when the load on the system is very high, and to improve the overall performance in both underloaded and overloaded conditions. Our approach, known as group-EDF (gEDF), is based on dynamic grouping of tasks with deadlines that are very close to each other, and on using a shortest job first (SJF) technique to schedule tasks within the group. I believe that dynamically grouping tasks with similar deadlines and utilizing secondary criteria, such as minimizing the total execution time, can lead to new and more efficient real-time scheduling algorithms. I present results comparing gEDF with other real-time algorithms, including EDF, best-effort, and the guarantee scheme, using randomly generated tasks with varying execution times, release times, deadlines and tolerances to missing deadlines, under varying workloads. Furthermore, I implemented the gEDF algorithm in the Linux kernel and evaluated it for scheduling real applications.
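The grouping idea described above (EDF order between groups, SJF within a group) can be sketched as follows. The group-membership rule used here, deadlines within a fixed fraction of the earliest deadline, and the task parameters are simplifying assumptions for illustration, not details taken from the dissertation.

```python
def gedf_order(tasks, gr):
    """Order ready tasks by group-EDF: tasks whose deadlines lie close to the
    earliest deadline form a group, scheduled shortest-job-first.

    tasks: list of (name, exec_time, deadline); gr: group-range parameter."""
    remaining = sorted(tasks, key=lambda t: t[2])  # EDF pre-sort by deadline
    order = []
    while remaining:
        earliest = remaining[0][2]
        # Group = tasks with deadlines within gr of the earliest one.
        # (This grouping rule is a simplification, not the dissertation's exact rule.)
        group = [t for t in remaining if t[2] <= earliest * (1 + gr)]
        group.sort(key=lambda t: t[1])             # SJF within the group
        job = group[0]
        order.append(job[0])
        remaining.remove(job)
    return order

tasks = [("a", 5, 10), ("b", 1, 11), ("c", 2, 30)]
result = gedf_order(tasks, gr=0.2)
print(result)  # ['b', 'a', 'c']: b and a share a group, so the shorter b runs first
```

Plain EDF would run "a" first; gEDF notices that "b" has a nearly identical deadline and a much shorter execution time, which is exactly the reordering that raises success ratios under overload.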
16

Reliability Aware Thermal Management of Real-time Multi-core Systems

Xu, Shikang 18 March 2015 (has links)
Continued scaling of CMOS technology has led to increasing working temperatures in VLSI circuits. High temperature brings a greater probability of permanent errors (failures) in VLSI circuits, which is a critical threat for real-time systems. As the multi-core architecture gains in popularity, this research proposes an adaptive workload assignment approach for multi-core real-time systems that balances thermal stress among cores. While previously developed scheduling algorithms use temperature as the criterion, the proposed algorithm uses the reliability of each core in the system to dynamically assign tasks to cores. The simulation results show that the proposed algorithm improves system reliability by as much as 10% compared with the commonly used static assignment, whereas algorithms using temperature as the criterion gain 4%. The reliability difference between cores, which indicates the imbalance of thermal stress on each core, is up to 25 times smaller when the proposed algorithm is applied.
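A greedy version of the reliability-based assignment described above might look like this. The linear wear model and its rate are toy assumptions for illustration; the paper's actual reliability model is not given in the abstract.

```python
def assign_tasks(exec_times, n_cores, wear_rate=0.001):
    """Greedy reliability-aware dispatch: each task goes to the core with the
    highest remaining reliability; running a task lowers that core's
    reliability in proportion to its execution time (a toy wear model)."""
    reliability = [1.0] * n_cores
    placement = []
    for t in exec_times:
        core = max(range(n_cores), key=lambda c: reliability[c])
        placement.append(core)
        reliability[core] -= wear_rate * t
    return placement, reliability

placement, rel = assign_tasks([4, 4, 2, 2, 6, 6], n_cores=2)
print(placement)  # [0, 1, 0, 1, 0, 1]
```

With equal-length pairs of tasks on two cores, the dispatcher alternates cores and both end with identical reliability, which is the thermal-stress balancing effect the paper targets; a static assignment could pile all the long tasks onto one core.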
17

An Analysis to Identify the Factors that Impact the Performance of Real-Time Software Systems : A Systematic Mapping Study and Case Study

Bejawada, Sravani January 2020 (has links)
Background: Many organizations lack the time, resources, or experience to derive the myriad of input factors impacting performance. Instead, developers use a trial-and-error approach to analyze performance, which is a difficult and time-consuming process when working with complex systems. Many factors impact the performance of real-time software systems; this research paper identifies the most important ones. Black-box (performance) testing focuses solely on the outputs generated in response to the supplied factors while neglecting the internal components of the software. Objectives: The objective of this research is to identify the most important factors that impact the performance of real-time software systems. Identifying these factors helps developers improve the performance of such systems. The context in which the objective is achieved is an online charging system, one of the software components of business support systems. In real-time systems, traffic changes in a fraction of a second, so measuring the performance of these systems is important. Latency is also one of the major factors that impact the performance of any real-time system. An additional motivation for this research is to explore a few other major factors that impact performance. Methods: A systematic mapping study (SMS) and a case study were conducted to identify the important factors that impact the performance of real-time software systems. Two data collection methods, a survey and interviews, were designed and executed to collect qualitative data. The survey and interviews were conducted among 12 experienced experts with prior knowledge of the system's performance, to determine the most important factors impacting the performance of the online charging system.
The qualitative data collected from the case study were categorized using thematic analysis. The quantitative data, i.e., logs collected from industry, were analyzed using the random forest feature importance algorithm to identify the factors with the highest impact on the performance of the online charging system. Results: A systematic mapping study was conducted to review the existing literature; 22 factors were identified from 21 articles. 12 new factors, not previously identified in the literature, emerged from the survey. From the available quantitative data, the factors were identified based on their performance impact on the system. Knowing these factors helps developers resolve performance issues, for example by allocating more virtual machines, thereby improving the performance of the system and making its behaviour better understood. These results are based on the experts' opinions. Conclusions: This study identifies the most important factors that impact the performance of real-time software systems. The identified factors are mostly technical, such as CPU utilization, memory, and latency. The objectives were addressed by selecting suitable research methods.
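The feature-importance step above can be approximated without any machine-learning library by permutation importance: shuffle one input column and measure how much prediction error grows. This is a dependency-free stand-in for illustration, not the study's actual random-forest pipeline; the toy "cpu_load vs. noise" data and the surrogate model are invented.

```python
import random

def permutation_importance(model, rows, targets, n_features):
    """Rank features by how much shuffling each one degrades prediction
    accuracy (a stand-in for random-forest feature importance)."""
    def error(data):
        return sum((model(r) - t) ** 2 for r, t in zip(data, targets)) / len(data)

    base = error(rows)
    scores = []
    shuf_rng = random.Random(0)
    for f in range(n_features):
        col = [r[f] for r in rows]
        shuf_rng.shuffle(col)
        perturbed = [r[:f] + (v,) + r[f + 1:] for r, v in zip(rows, col)]
        scores.append(error(perturbed) - base)  # importance = error increase
    return scores

# Toy "system": response time depends on CPU load, not on the noise feature.
rng = random.Random(1)
rows = [(rng.random(), rng.random()) for _ in range(200)]  # (cpu_load, noise)
targets = [3 * cpu for cpu, _ in rows]
model = lambda r: 3 * r[0]                                 # a perfect surrogate model

imp = permutation_importance(model, rows, targets, n_features=2)
print(imp[0] > imp[1])  # True: shuffling cpu_load hurts, shuffling noise does not
```

Ranking factors this way is what lets the study say, from logs alone, which inputs (CPU utilization, memory, latency) dominate the charging system's performance.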
18

Minimising shared resource contention when scheduling real-time applications on multi-core architectures / Minimiser l’impact des communications lors de l’ordonnancement d’application temps-réels sur des architectures multi-cœurs

Rouxel, Benjamin 19 December 2018 (has links)
Les architectures multi-cœurs utilisant des mémoires bloc-notes sont des architectures attrayantes pour l'exécution des applications embarquées temps-réel, car elles offrent une grande capacité de calcul. Cependant, les systèmes temps-réel nécessitent de satisfaire des contraintes temporelles, ce qui peut être compliqué sur ce type d'architectures à cause notamment des ressources matérielles physiquement partagées entre les cœurs. Plus précisément, les scénarios de pire cas de partage du bus de communication entre les cœurs et la mémoire externe sont trop pessimistes. Cette thèse propose des stratégies pour réduire ce pessimisme lors de l'ordonnancement d'applications sur des architectures multi-cœurs. Tout d'abord, la précision du pire cas des coûts de communication est accrue grâce aux informations disponibles sur l'application et l'état de l'ordonnancement en cours. Ensuite, les capacités de parallélisation du matériel sont exploitées afin de superposer les calculs et les communications. De plus, les possibilités de superposition sont accrues par le morcellement de ces communications. / Multi-core architectures using scratchpad memories are very attractive for executing embedded time-critical applications, because they offer a large computational power. However, ensuring that timing constraints are met on such platforms is challenging, because some hardware resources are shared between cores. When targeting the bus connecting the cores and the external memory, worst-case sharing scenarios are too pessimistic. This thesis proposes strategies to reduce this pessimism. These strategies both improve the accuracy of worst-case communication costs and exploit the hardware's parallel capabilities by overlapping computations and communications. Moreover, fragmenting the communications increases the overlapping opportunities.
19

Enriching Enea OSE for Better Predictability Support

Ul Mustafa, Naveed January 2011 (has links)
A real-time application is designed as a set of tasks with specific timing attributes and constraints. These tasks can be categorized as periodic, sporadic or aperiodic, based on the timing attributes that are specified for them, which in turn define their runtime behaviors. To ensure correct execution and behavior of the task set at runtime, the scheduler of the underlying operating system should take into account the type of each task (i.e., periodic, sporadic, aperiodic). This is important so that the scheduler can schedule the task set in a predictable way and allocate CPU time to each task appropriately in order for them to meet their timing constraints. Enea OSE is a real-time operating system with a fixed-priority preemptive scheduling policy, which is used heavily in embedded systems such as the telecommunication systems developed by Ericsson. While OSE allows for the specification of priority levels for tasks and schedules them accordingly, it cannot distinguish between different types of tasks. This thesis work investigates mechanisms to build a scheduler on top of OSE which can identify the three types of real-time tasks and schedule them in a more predictable way. The scheduler can also monitor the behavior of the task set at run-time and invoke violation handlers if a task's timing constraints are violated. The scheduler is implemented on the OSE5.5 soft kernel. It identifies periodic, aperiodic and sporadic tasks; sporadic and aperiodic tasks can be interrupt driven or program driven. The scheduler implements EDF and RMS as scheduling policies for periodic tasks, while sporadic and aperiodic tasks can be scheduled using a polling server or a background scheme. Schedules generated by the scheduler deviate from the expected timing behavior due to scheduling overhead; approaches to reduce this deviation are suggested as future extensions of the thesis work.
The usability of the scheduler can be increased by extending it to support other scheduling algorithms in addition to RMS and EDF. / CHESS
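For the periodic-task policies mentioned above, rate-monotonic priority assignment and the classic sufficient utilization test can be sketched as follows. The task set is invented, and the Liu-Layland bound is standard scheduling theory rather than anything specific to the OSE implementation.

```python
def rms_priorities(tasks):
    """Rate-monotonic: shorter period means higher priority (lower number).
    Each task is (name, period, wcet)."""
    ordered = sorted(tasks, key=lambda t: t[1])
    return {name: prio for prio, (name, _, _) in enumerate(ordered)}

def liu_layland_ok(tasks):
    """Sufficient RMS schedulability test: U <= n * (2**(1/n) - 1)."""
    n = len(tasks)
    u = sum(wcet / period for _, period, wcet in tasks)
    return u <= n * (2 ** (1 / n) - 1)

tasks = [("ctrl", 10, 2), ("log", 40, 4), ("net", 20, 3)]
prios = rms_priorities(tasks)
print(prios)               # ctrl highest, then net, then log
print(liu_layland_ok(tasks))  # True: U = 0.45 is under the bound for n = 3
```

An EDF variant would instead re-rank tasks at runtime by absolute deadline; the attraction of RMS in a fixed-priority kernel like OSE is that the priorities computed offline can map directly onto the kernel's own priority levels.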
20

A Study Of Genetic Representation Schemes For Scheduling Soft Real-Time Systems

Bugde, Amit 13 May 2006 (has links)
This research presents a hybrid algorithm that combines List Scheduling (LS) with a Genetic Algorithm (GA) for constructing non-preemptive schedules for soft real-time parallel applications represented as directed acyclic graphs (DAGs). The execution time requirements of the applications' tasks are assumed to be stochastic and are represented as probability distribution functions. The performance, in terms of schedule length, of three different genetic representation schemes is evaluated and compared for a number of different DAGs. The approaches presented in this research produce shorter schedules than HLFET, a popular LS approach, for all of the sample problems. Of the three genetic representation schemes investigated, PosCT, the technique that allows the GA to learn which tasks to delay in order to let other tasks complete, produced the shortest schedules for a majority of the sample DAGs.
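The list-scheduling half of the hybrid can be sketched with a simple greedy heuristic: repeatedly pick a ready task (all predecessors finished) and place it on the earliest-free processor. This is a plain deterministic heuristic in the spirit of HLFET, not the GA hybrid itself; the DAG, the weights, and the priority rule (largest weight first, standing in for static levels) are invented.

```python
def list_schedule(wcets, deps, n_procs):
    """Greedy list scheduling of a DAG onto n_procs identical processors.
    wcets: task -> execution time; deps: task -> list of predecessors."""
    done_at = {}                      # task -> finish time
    proc_free = [0.0] * n_procs      # per-processor availability
    pending = set(wcets)
    order = []
    while pending:
        ready = [t for t in pending if all(p in done_at for p in deps.get(t, []))]
        # Priority rule: heaviest ready task first (a stand-in for HLFET's
        # static levels, which use longest downstream path instead).
        task = max(ready, key=lambda t: wcets[t])
        proc = min(range(n_procs), key=lambda p: proc_free[p])
        start = max([proc_free[proc]] + [done_at[p] for p in deps.get(task, [])])
        done_at[task] = start + wcets[task]
        proc_free[proc] = done_at[task]
        pending.remove(task)
        order.append(task)
    return order, max(done_at.values())

wcets = {"a": 2, "b": 3, "c": 1, "d": 2}
deps = {"c": ["a", "b"], "d": ["a"]}
order, makespan = list_schedule(wcets, deps, n_procs=2)
print(order, makespan)  # ['b', 'a', 'd', 'c'] 4
```

The GA in the dissertation effectively searches over the choices this heuristic makes greedily, e.g. which ready task to delay, which is where representations such as PosCT come in.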
