401 |
Aide à l'Analyse de Traces d'Exécution dans le Contexte des Microcontrôleurs. Amiar, Azzeddine, 27 November 2013 (has links) (PDF)
Due to the cyclic nature of embedded programs, microcontroller traces often contain very large amounts of data. Moreover, in our working context, a single trace ending in a failure is available for analysing the behaviour. The objective of the work presented in this thesis is to assist in the analysis of microcontroller traces. The first contribution of this thesis concerns the identification of cycles and the generation of a relevant description of the trace. Cycle detection relies on identifying the loop header. The description offered to the engineer is produced using grammar-based compression, which detects repetitions in the trace. The second contribution concerns fault localization. It is based on the analogy between program executions and cycles. To assist in the analysis of the trace, we adapted spectrum-based fault-localization techniques. We also defined a filtering process that reduces the number of cycles used for fault localization. Our third contribution concerns assisting the analysis of cases where the multiple cycles of a single execution interact with each other. To perform fault localization in such cases, we mine association rules. Grouping the cycles into two sets (suspicious cycles and correct cycles) for association-rule mining makes it possible to characterize the behaviours judged correct and those judged suspicious. For fault localization, we then offer the engineer a diagnosis based on analysing the association rules according to their degree of suspicion.
This thesis also presents the evaluations carried out to measure the effectiveness of each of the contributions discussed, as well as our tool CoMET. The results of these evaluations demonstrate the effectiveness of our work in assisting the analysis of microcontroller traces.
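Spectrum-based fault localization, adapted here from program executions to trace cycles, scores each trace element by how strongly its occurrence correlates with cycles judged suspicious. A minimal sketch using the common Ochiai metric (the metric choice and all names are illustrative; this is not CoMET's implementation):

```python
import math

def ochiai_scores(spectra, outcomes):
    """Rank trace elements by suspiciousness from coverage spectra.

    spectra[i][e] is True if element e occurred in cycle i;
    outcomes[i] is True if cycle i is judged failing/suspicious.
    """
    n_elems = len(spectra[0])
    total_failed = sum(outcomes)
    scores = []
    for e in range(n_elems):
        ef = sum(1 for i, row in enumerate(spectra) if row[e] and outcomes[i])      # suspicious, occurred
        ep = sum(1 for i, row in enumerate(spectra) if row[e] and not outcomes[i])  # correct, occurred
        denom = math.sqrt(total_failed * (ef + ep))
        scores.append(ef / denom if denom else 0.0)
    return scores

# Three cycles covering four trace elements; only the last cycle is suspicious.
spectra = [
    [True,  True,  False, False],
    [True,  False, True,  False],
    [True,  True,  True,  True],
]
outcomes = [False, False, True]
scores = ochiai_scores(spectra, outcomes)
```

Element 3, which occurred only in the suspicious cycle, receives the highest score; elements that also occurred in correct cycles are ranked lower.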
|
402 |
Dependable Cyber-Physical Systems. Kim, Junsung, 01 May 2014 (has links)
CPS (Cyber-Physical Systems) enable a new class of applications that perceive their surroundings using raw data from sensors, monitor the timing of dynamic processes, and control the physical environment. Since failures and misbehaviors in application domains such as cars, medical devices, nuclear power plants, etc., may cause significant damage to life and/or property, CPS need to be safe and dependable. A conventional way of improving dependability is to use redundant hardware to replicate the whole (sub)system. Although hardware replication has been widely deployed in conventional mission-critical systems, it is cost-prohibitive to many emerging CPS application domains. Hardware replication also leads to limited system flexibility. This dissertation studies the problem of making CPS affordably dependable and develops a system-level framework that manages critical CPS resources including processors, networks, and sensors. Our framework called SAFER (System-level Architecture for Failure Evasion in Real-time applications) incorporates configurable software mechanisms and policies to tolerate failures of critical CPS resources while meeting their timing constraints. It supports adaptive graceful degradation, the effective use of different sensor modalities, and the fault-tolerant schemes of hot standby, cold standby, and re-execution. SAFER reliably and efficiently allocates tasks and their backups to CPU and sensor resources while satisfying network traffic constraints. It also fuses and (re)configures sensor data used by tasks to recover from system failures. The SAFER framework aims to guarantee the timeliness of different types of tasks that fall into one of four categories: (1) tasks with periodic arrivals, (2) tasks with continually varying periods, (3) tasks with parallel threads, and (4) tasks with self-suspensions. We offer the schedulability analyses and runtime support for such tasks with and without resource failures. 
Finally, the functionality of the proposed system is evaluated on a self-driving car using SAFER. We conclude that the proposed framework analytically satisfies timing constraints and predictably operates systems with and without resource failures, hence making CPS dependable and timely.
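SAFER's timeliness guarantee hinges on schedulability analysis for each task category. For the simplest category, periodic arrivals under fixed-priority scheduling, the standard response-time analysis can be sketched as follows (a textbook sketch, not SAFER's own analysis; all task parameters are invented):

```python
import math

def response_times(tasks):
    """Classic fixed-priority response-time analysis for periodic tasks.

    tasks: list of (wcet, period), sorted highest priority first, with
    implicit deadlines (deadline = period). Returns the worst-case
    response times, or None if some task can miss its deadline.
    """
    results = []
    for i, (c_i, t_i) in enumerate(tasks):
        r = c_i
        while True:
            # Interference from all higher-priority tasks released in [0, r)
            r_next = c_i + sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
            if r_next > t_i:
                return None          # deadline miss: task set unschedulable
            if r_next == r:
                break
            r = r_next
        results.append(r)
    return results

# A control task, a sensor task, and a backup task admitted only while
# the whole set remains schedulable (values are illustrative).
rts = response_times([(1, 4), (2, 6), (3, 12)])
```

A framework admitting hot- or cold-standby backups would run such a test before placing a backup on a CPU, rejecting placements that return None.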
|
403 |
Une méthode globale pour la vérification d'exigences temps réel : application à l'Avionique Modulaire Intégrée. Lauer, Michaël, 12 June 2012 (has links) (PDF)
In aeronautics, embedded systems appeared during the 1960s, when analogue equipment began to be replaced by its digital counterpart. The enthusiasm generated by advances in computing was such that more and more functionality was digitized. The steady growth in system complexity led to the definition of an architecture called Integrated Modular Avionics (IMA). This architecture differs from earlier ones in that it is based on standards (ARINC 653 and ARINC 664 part 7) that allow computing and communication resources to be shared among the different avionics functions. This kind of architecture is used both in civil aviation, with the Boeing B777 and the Airbus A380, and in military aviation, with the Rafale and the A400M. For safety reasons, the temporal behaviour of a system built on an IMA architecture must be predictable. This need translates into a set of real-time requirements that the system must satisfy. The problem explored in this thesis is the verification of real-time requirements in IMA systems. These requirements are built around functional chains, which are sequences of functions. A requirement then specifies an acceptable bound (minimum or maximum) on a temporal property of one or more functional chains. We identified three categories of real-time requirements that we consider relevant to the systems under study: latency, freshness, and consistency requirements. We propose a model of IMA systems, and of the requirements they must satisfy, in the tagged signal model formalism.
We then show how, from this model, we can generate for each requirement a mixed integer linear program, i.e. one containing both integer and real variables, whose optimal solution verifies whether the requirement is satisfied.
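As a much smaller illustration of the kind of bound being verified, the following sketch computes a standard conservative worst-case end-to-end latency for a chain of asynchronous periodic functions and checks it against a maximum-latency requirement (this simple summation is not the thesis's MILP formulation, which computes exact optima):

```python
def worst_case_latency(chain):
    """Conservative end-to-end latency bound for a functional chain.

    chain: list of (period, wcet) per function; each function samples its
    predecessor's output asynchronously, so in the worst case a fresh
    value just misses an activation and waits a full period.
    """
    return sum(period + wcet for period, wcet in chain)

def check_latency_requirement(chain, bound):
    """Verify a maximum-latency requirement against the conservative bound."""
    return worst_case_latency(chain) <= bound

# Three-function chain (periods/WCETs in ms, values illustrative);
# the requirement demands end-to-end latency <= 70 ms.
chain = [(10.0, 2.0), (20.0, 5.0), (25.0, 3.0)]
ok = check_latency_requirement(chain, 70.0)
```

An exact MILP formulation can prove tighter bounds than this summation, which is precisely why the thesis resorts to optimization rather than simple composition.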
|
404 |
Navigation visuelle pour l'atterrissage planétaire de précision indépendante du relief. Delaune, J., 04 July 2013 (has links) (PDF)
This thesis presents Lion, a navigation system that uses visual and inertial information for precision planetary landing. Lion is designed to fly over any type of terrain, flat or rugged, and makes no assumption about its topography. Landing an autonomous planetary exploration vehicle within 100 metres of a mapped target is a navigation challenge. Vision-based approaches attempt to match 2D features detected in an image against mapped 3D landmarks to reach the required precision. Lion tightly couples measurements from a new image-to-map matching algorithm to update the state of an extended Kalman filter that integrates inertial data. The image processing uses the state and covariance predictions of the filter to determine the regions and extraction scales in the image where unambiguous landmarks can be found. Processing the image scale locally for each landmark significantly improves the repeatability of their detection between the descent image and the orbital reference image. We also designed a hardware testbed called Visilab to evaluate Lion under conditions representative of a lunar mission. The observability of absolute navigation performance in Visilab is assessed using a new error model. The performance of the system is evaluated at the key altitudes of the descent, in terms of navigation precision and robustness to changes of sensors, illumination, and descent-camera inclination, and over different types of terrain. Lion converges to an error of 4 metres in mean and 47 metres in 3-RMS dispersion at an altitude of 3 kilometres at full scale.
|
405 |
Leap segmentation in mobile image and video analysis. Forsthoefel, Dana, 13 January 2014 (has links)
As demand for real-time image processing increases, the need to improve the efficiency of image processing systems is growing. The process of image segmentation is often used in preprocessing stages of computer vision systems to reduce image data and increase processing efficiency. This dissertation introduces a novel image segmentation approach known as leap segmentation, which applies a flexible definition of adjacency to allow groupings of pixels into segments which need not be spatially contiguous and thus can more accurately correspond to large surfaces in the scene. Experiments show that leap segmentation correctly preserves an average of 20% more original scene pixels than traditional approaches, while using the same number of segments, and significantly improves execution performance (executing 10x - 15x faster than leading approaches). Further, leap segmentation is shown to improve the efficiency of a high-level vision application for scene layout analysis within 3D scene reconstruction.
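The defining idea, grouping pixels into segments that need not be spatially contiguous, can be illustrated with a toy greedy grouping by intensity alone (the real algorithm's grouping criteria and data structures are not reproduced here):

```python
def leap_like_segments(pixels, threshold):
    """Greedily assign pixels to segments by value similarity alone.

    pixels: flat list of intensity values; spatial position is ignored,
    so one segment can cover disjoint patches of the same scene surface.
    """
    seg_means = []      # running mean intensity per segment
    seg_counts = []
    labels = []
    for v in pixels:
        best, best_d = None, threshold
        for s, m in enumerate(seg_means):
            d = abs(v - m)
            if d < best_d:
                best, best_d = s, d
        if best is None:
            seg_means.append(float(v))       # start a new segment
            seg_counts.append(1)
            labels.append(len(seg_means) - 1)
        else:
            seg_counts[best] += 1            # fold pixel into best segment
            seg_means[best] += (v - seg_means[best]) / seg_counts[best]
            labels.append(best)
    return labels

# A road surface (values near 50) interrupted by a car (values near 200):
# the two road patches fall into one segment despite not touching.
labels = leap_like_segments([50, 52, 200, 205, 51, 49], threshold=30)
```

With a contiguity-enforcing segmenter the road would cost two segments; relaxing adjacency lets one segment represent the whole surface, which is the source of the data reduction described above.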
The benefits of applying image segmentation in preprocessing are not limited to single-frame image processing. Segmentation is also often applied in the preprocessing stages of video analysis applications. In the second contribution of this dissertation, the fast, single-frame leap segmentation approach is extended into the temporal domain to develop a highly-efficient method for multiple-frame segmentation, called video leap segmentation. This approach is evaluated for use on mobile platforms where processing speed is critical using moving-camera traffic sequences captured on busy, multi-lane highways. Video leap segmentation accurately tracks segments across temporal bounds, maintaining temporal coherence between the input sequence frames. It is shown that video leap segmentation can be applied with high accuracy to the task of salient segment transformation detection for alerting drivers to important scene changes that may affect future steering decisions.
Finally, while research efforts in the field of image segmentation have often recognized the need for efficient implementations for real-time processing, many of today's leading image segmentation approaches exhibit processing times that exceed their camera frame periods, making them infeasible for real-time applications. The third research contribution of this dissertation focuses on developing fast implementations of the single-frame leap segmentation approach for both single-core and multi-core platforms, as well as for both high-performance and resource-constrained systems. While the design of leap segmentation lends itself to efficient implementations, the efficiency achieved by this algorithm, as with any algorithm, can be improved with careful implementation optimizations. The leap segmentation approach is analyzed in detail, and highly optimized implementations are presented with in-depth studies ranging from storage considerations to realizing parallel-processing potential. The final implementations of leap segmentation for both serial and parallel platforms are shown to achieve real-time frame rates even when processing very high resolution input images.
Leap segmentation’s accuracy and speed make it a highly competitive alternative to today’s leading segmentation approaches for modern, real-time computer vision systems.
|
406 |
Implementation and analysis of a virtual platform based on an embedded system / Implementation och analys av en virtuell plattform baserat på ett inbyggt system. Sandstedt, Adam, January 2014 (has links)
The complexity of embedded systems has increased dramatically in recent years, while the capacity of the hardware has grown to astonishing levels over the same period. These factors have made software a leading, and time-consuming, part of embedded system development. Compared with regular software development, embedded development is often more constrained by factors such as hardware performance and testing capability. One proposed solution to some of these problems is the concept of virtual platforms. By emulating the hardware in a software environment, it is possible to avoid some of the problems associated with embedded software development: for example, a system can be executed faster than in reality, and the testing environment becomes more controllable. This thesis presents a case study of an application-specific virtual platform. The platform is based on an existing embedded system that is part of an industrial control system. The virtual platform is able to execute unmodified application code at twice the speed of the real system without causing any software faults, and the system can be simulated at even higher speeds if some loss of accuracy is regarded as acceptable. The thesis presents tools and methods that can be used to model hardware at a functional level in a software environment. It also investigates the accuracy of the virtual platform by comparing it with measurements from the physical system; in this case, the measurements focus mainly on the data transactions on a Controller Area Network (CAN) bus.
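Modelling hardware at a functional level means reproducing the behaviour software can observe (frames, ordering, simulated time) rather than gate-level detail. A minimal sketch of a functional CAN-bus model of the kind such a platform might contain (the class, API, and timing values are invented for illustration, not taken from the thesis's platform):

```python
class FunctionalCanBus:
    """Functional-level CAN bus model: arbitration by lowest identifier.

    Frames queued in the same simulated time slot are delivered in
    priority order, which is the observable behaviour software relies on.
    """
    def __init__(self):
        self.pending = []
        self.delivered = []
        self.sim_time_us = 0

    def queue_frame(self, can_id, payload):
        self.pending.append((can_id, payload))

    def step(self, frame_time_us=100):
        """Advance simulated time by one frame slot; lowest CAN id wins."""
        if self.pending:
            self.pending.sort(key=lambda f: f[0])
            self.delivered.append(self.pending.pop(0))
        self.sim_time_us += frame_time_us

bus = FunctionalCanBus()
bus.queue_frame(0x18, b"\x01")
bus.queue_frame(0x05, b"\x02")   # higher priority (lower identifier)
bus.step()
bus.step()
order = [f[0] for f in bus.delivered]
```

Because simulated time is just a counter, such a model can run faster than real time, and it can be made slower but more detailed by refining `step`, the accuracy trade-off the thesis measures against the physical CAN bus.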
|
407 |
Design and development of an automated regression test suite for UEFI. Saadat, Huzaifa, 20 January 2015 (has links) (PDF)
Unified Extensible Firmware Interface (UEFI) is an industry standard for implementing the basic firmware in computers, replacing the legacy BIOS. A huge amount of C code has been written to implement UEFI, yet there has been very little focus on testing it. This thesis shows how the industry can perform meaningful testing of UEFI. A key objective is to span test coverage over all UEFI phases with the help of test tools. Moreover, techniques such as Test Driven Development and source-code analysis are explained in the context of UEFI to ensure that bugs are minimized in the first place. The results show that the use of test and analysis tools points to a large number of issues, some of which can be fixed at a very early stage of the software development life cycle. For this reason, developers and testers should be convinced to focus on testing UEFI from a software perspective.
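Applied to firmware, Test Driven Development means pinning down a routine's contract with a test before the routine is written. A language-neutral sketch, with Python standing in for the C that UEFI is written in, using an invented routine for the zero-sum checksum style common in firmware tables (not an actual UEFI function):

```python
def table_checksum_valid(table):
    """True if the byte values of a firmware table sum to 0 modulo 256.

    Many firmware structures use this scheme: the checksum byte is
    chosen so that the whole table sums to zero.
    """
    return sum(table) % 256 == 0

def make_table(body):
    """Append the checksum byte that makes the table valid."""
    return bytes(body) + bytes([(-sum(body)) % 256])

# The test, written first, fixes the contract the implementation must meet:
table = make_table([0x12, 0x34, 0x56])
```

Writing the validity check and its failure case as tests before implementing `make_table` is the TDD discipline in miniature; the same pattern applies to UEFI C code driven by a host-side test harness.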
|
408 |
From Models to Code and Back : A Round-trip Approach for Model-driven Engineering of Embedded Systems. Ciccozzi, Federico, January 2014 (links)
The complexity of modern systems is continuously growing, demanding novel, powerful development approaches. In this direction, model-driven and component-based software engineering have reached the status of promising paradigms for the development of complex systems. Moreover, in the embedded domain, their combination is believed to help in handling the ever-increasing complexity of such systems. However, for them and their combination to definitively break through at the industrial level, code generated from models through model transformations should preserve the system properties modelled at design level. This research work focuses on aiding the preservation of system properties throughout the entire development process, across different abstraction levels. Towards this goal, we provide the possibility of analysing and preserving system properties through a development chain of three steps: (i) generation of code from system models, (ii) execution and analysis of the generated code, and (iii) back-propagation of analysis results to the system models. With the introduction of steps (ii) and (iii), properties that are hard to predict at modelling level are compared with runtime values; this allows the developer to work exclusively at modelling level, optimising system models with the help of those values. / This doctoral thesis presents new and improved techniques for model-driven and component-based software development. The aim is to preserve system properties, as specified in models, through the different stages of development and when models are translated between abstraction levels and into code. We introduce the possibility of studying and preserving the system's properties by creating a three-step chain that (i) generates code from the system model, (ii) executes and analyses the generated code, and (iii) finally feeds the analysis values back into the system model.
The introduction of steps (ii) and (iii) makes it possible to carry out a detailed analysis of properties that are difficult, or even impossible, to study using system models alone. The advantage of this approach is that it simplifies the developer's work: there is no need to work directly with code to adjust system properties. Instead, the developer can work entirely with models and focus on optimising the system models using analysis values from test runs of the system. We are convinced that this kind of technique is necessary to support model-driven software development, since today's techniques do not allow system developers to specify, analyse, and optimise system properties at the model level. / The continuous growth in complexity of modern software systems creates the need for new and more effective development approaches. In this direction, model-driven engineering and component-based software engineering have been recognised as promising alternatives for developing complex systems, and their combination is considered particularly advantageous for embedded systems. For these approaches, and their combination, to definitively take hold in industry, the code generated from models through transformations must preserve the system properties, both functional and extra-functional, defined in the models. The research work presented in this doctoral thesis focuses on preserving system properties throughout the development process and across the different abstraction levels.
The main result is an automatic round-trip engineering approach that supports the preservation of system properties through: (1) automatic code generation, (2) monitoring and analysis of the execution of the generated code on specific platforms, and (3) the ability to propagate runtime results back up to the modelling level. In this way, properties that can only be roughly estimated statically are evaluated against the values obtained at runtime. This makes it possible to optimise the system at design level, through the models, rather than manually at code level, ensuring the preservation of the system properties of interest.
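The back-propagation step can be pictured as annotating model elements with measured runtime values and flagging where a modelled budget is exceeded. A sketch with an invented dictionary representation of the model (not the thesis's actual model format or tooling):

```python
def back_propagate(model, measurements):
    """Annotate model elements with measured values and flag violations.

    model: {component: {"budget_ms": float}} holds the modelled property;
    measurements: {component: [observed run times]} comes from execution.
    Returns the updated model for the developer to optimise at model level.
    """
    for name, runs in measurements.items():
        observed = max(runs)                  # worst observed execution time
        element = model[name]
        element["observed_ms"] = observed
        element["violated"] = observed > element["budget_ms"]
    return model

# Illustrative budgets and measurements for two components.
model = {"controller": {"budget_ms": 5.0}, "logger": {"budget_ms": 2.0}}
measurements = {"controller": [3.1, 4.2, 4.9], "logger": [1.0, 2.6]}
model = back_propagate(model, measurements)
```

After back-propagation, the developer sees directly in the model that the logger's budget is too optimistic and can rework the model rather than the generated code.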
|
409 |
Microarchitecture and FPGA Implementation of the Multi-level Computing Architecture. Capalija, Davor, 30 July 2008 (has links)
We design the microarchitecture of the Multi-Level Computing Architecture (MLCA), focusing on its Control Processor (CP). The design of the CP microarchitecture presents both opportunities and challenges that stem from the coarse granularity of the tasks and the large number of inputs and outputs of each task instruction. Thus, we explore changes to standard superscalar microarchitectural techniques. We design the entire CP microarchitecture and implement it on an FPGA using SystemVerilog. We synthesize and evaluate the MLCA system based on a 4-processor shared-memory multiprocessor. The performance of realistic applications shows scalable speedups comparable to those obtained in simulation. We believe that our implementation achieves low complexity in terms of FPGA resource usage and operating frequency. In addition, we argue that our design methodology allows the CP to scale as the entire system grows.
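The CP issues coarse-grained task instructions once their inputs are available, much as a superscalar core issues instructions once their operands are ready. A toy scoreboard-style sketch of that issue logic (a behavioural illustration in Python, not the MLCA microarchitecture or its SystemVerilog implementation):

```python
def issue_order(tasks):
    """Issue tasks whose inputs are ready; each issue makes outputs ready.

    tasks: list of (name, inputs, outputs). Models the core of a
    scoreboard: a task instruction issues only when every one of its
    (possibly many) inputs has been produced.
    """
    ready = {"in"}                 # values available before any task runs
    pending = list(tasks)
    order = []
    while pending:
        issued = False
        for t in pending:
            name, ins, outs = t
            if all(i in ready for i in ins):
                order.append(name)
                ready.update(outs)     # outputs become available to consumers
                pending.remove(t)
                issued = True
                break
        if not issued:
            raise RuntimeError("deadlock: unsatisfiable dependences")
    return order

# Task C consumes the outputs of A and B, so it issues last even though
# it appears first in program order.
tasks = [
    ("C", ["a", "b"], ["out"]),
    ("A", ["in"], ["a"]),
    ("B", ["in"], ["b"]),
]
order = issue_order(tasks)
```

Because each task instruction has many inputs and outputs, the readiness check and the register-like structures backing it dominate the hardware cost, which is why the CP departs from standard superscalar designs.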
|