591 |
Assistance au raffinement dans la conception des systèmes embarqués / Assisting formal refinement and verification in embedded system design. Mokrani, Hocine, 10 June 2014 (has links)
Over the last decade, the complexity of embedded technologies has exploded, and the usual industrial design flows no longer suffice to deliver reliable products within time-to-market constraints. Developing new design methodologies has therefore become imperative. This thesis aims at improving design methodologies for embedded systems. It proposes a method for assisting the refinement process along the design flow: the flow is split into multiple abstraction levels so as to guide the designer from the most abstract model down to a synthesizable model, with particular attention to the refinement of communication components. By exploiting property-proving techniques associated with formal refinement, the method makes it possible to reason about the different description levels of a system and to check that functional correctness is preserved along the design flow.
|
592 |
Localisation par vision multi-spectrale : Application aux systèmes embarqués / Multi-spectral vision localisation : An embedded systems application. Gonzalez, Aurelien, 08 July 2013 (has links)
The SLAM (Simultaneous Localization and Mapping) problem has been widely studied at LAAS for several years. The targeted application is a system assisting airliners while taxiing on airports, which must remain operational whatever the weather and lighting conditions (the SART project, funded by the DGE in partnership mainly with FLIR Systems, Latécoère and Thales). Under poor visibility (low light, fog, rain...), a single conventional camera is not sufficient for the localization function, so the first part of the thesis studies what a thermal infrared camera brings to the SLAM problem compared to a visible-light camera. The second part addresses the use of an inertial measurement unit (IMU) and a GPS in the SLAM algorithm, the IMU helping the motion-prediction step and the GPS correcting possible divergences. Finally, pseudo-observations are integrated into the same SLAM, obtained by matching line segments extracted from the images against the same segments stored in a map database. The set of observations and pseudo-observations aims at localizing the vehicle to within one metre. Since the algorithms must run on an FPGA with a processor of low power compared to a standard PC (400 MHz), a co-design has to be carried out between the FPGA logic, which processes the images on the fly, and the processor, which runs the extended Kalman filter (EKF) for SLAM, so as to guarantee a real-time application at 30 Hz. These algorithms, developed specifically for co-design and avionic embedded systems, are tested on the LAAS robotic platform and then ported to various development boards (Virtex 5, Raspberry Pi, PandaBoard...) for performance evaluation.
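For illustration only (not code from the thesis), the following minimal Python sketch shows the prediction/correction split described above, with an IMU-style motion model in the prediction step and a GPS position fix in the correction step; the state layout, motion model and noise matrices are assumptions.

```python
import numpy as np

# Minimal EKF skeleton illustrating the prediction/correction split described
# above: an IMU-driven motion model predicts the pose, a GPS fix corrects it.
# State x = [px, py, heading]; all models and noise values are illustrative.

def ekf_predict(x, P, u, dt, Q):
    """Propagate the state with a simple motion model u = (speed, yaw_rate)."""
    v, w = u
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + w * dt])
    # Jacobian of the motion model with respect to the state
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, R):
    """Correct the predicted pose with a position fix z = (gps_x, gps_y)."""
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])        # GPS observes the position only
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P
```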
|
593 |
Investigating Simultaneous Localization and Mapping for an Automated Guided Vehicle. Manhed, Joar, January 2019 (has links)
The aim of the thesis is to apply simultaneous localization and mapping (SLAM) to automated guided vehicles (AGVs) in a Robot Operating System (ROS) environment. Different sensor setups are used and evaluated. The SLAM applications used are the open-source solution Cartographer and Intel's commercial SLAM embedded in their T265 tracking camera. The different sensor setups are evaluated based on how closely the localization reproduces the exact pose of the AGV in comparison to another positioning system acting as ground truth.
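As an illustration of the kind of comparison against a ground-truth positioning system mentioned above, the sketch below computes an absolute trajectory error (RMSE of translational error); it is not the evaluation code used in the thesis, and it assumes the two trajectories are already time-aligned and expressed in the same frame.

```python
import numpy as np

# Illustrative absolute-trajectory-error (ATE) computation: one common way to
# score an estimated trajectory against a ground-truth positioning system.

def absolute_trajectory_error(estimated, ground_truth):
    """RMSE of the translational error between two (N, 2) pose arrays."""
    errors = np.linalg.norm(estimated - ground_truth, axis=1)
    return np.sqrt(np.mean(errors ** 2))

# Example with synthetic data
gt = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.1]])
est = np.array([[0.05, 0.0], [1.02, -0.03], [1.95, 0.12]])
print(f"ATE: {absolute_trajectory_error(est, gt):.3f} m")
```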
|
594 |
Compressed convolutional neural network for autonomous systems. Pathak, Durvesh, 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / The word "Perception" seems intuitive and is perhaps the most straightforward problem for the human brain, because as children we are trained to classify images and detect objects; for computers, however, it can be a daunting task. Giving intuition and reasoning to a computer that merely accepts and processes commands is a big challenge. Recent leaps in hardware development, sophisticated software frameworks and mathematical techniques have made it a little less daunting, if not easy. Various applications are built around the concept of perception, and they require substantial computational resources, expensive hardware and sophisticated software frameworks. Building a perception application for an embedded system is an entirely different ballgame: an embedded system is a culmination of hardware, software and peripherals developed for specific tasks with imposed constraints on memory and power, so applications developed for it must respect those memory and power constraints.

Before 2012, perception problems such as classification and object detection were solved with algorithms based on manually engineered features. In recent years these features are instead learned through learning algorithms. The game-changing Convolutional Neural Network architecture proposed in 2012 by Alex K [1] provided tremendous momentum in the direction of pushing neural networks for perception. This thesis is an attempt to develop a convolutional neural network architecture for embedded systems, i.e. an architecture with a small model size and competitive accuracy. State-of-the-art architectures are recreated using the fire-module concept to reduce their model size. The proposed compact models are feasible for deployment on embedded devices such as the Bluebox 2.0, and attempts are made to integrate the compact convolutional neural network with object detection pipelines.
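For context on the fire-module concept mentioned above, here is a minimal PyTorch-style sketch of a SqueezeNet-like fire module; the channel sizes are illustrative and the code is not taken from the thesis.

```python
import torch
import torch.nn as nn

# Minimal SqueezeNet-style "fire module" sketch: a 1x1 squeeze layer reduces the
# channel count before cheaper parallel 1x1/3x3 expand layers, which is the main
# lever for shrinking model size. Channel sizes here are illustrative only.

class FireModule(nn.Module):
    def __init__(self, in_ch, squeeze_ch, expand_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        # Concatenate the two expand branches along the channel dimension
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

# Example: 96 input channels squeezed to 16, expanded back to 64 + 64 = 128
module = FireModule(96, 16, 64)
out = module(torch.randn(1, 96, 56, 56))
print(out.shape)  # torch.Size([1, 128, 56, 56])
```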
|
595 |
Implementace šifrovacích algoritmů v jazyku VHDL / Implementation of Encryption Algorithms in VHDL Language. Kožený, Petr, January 2008 (has links)
This thesis deals with the design and implementation of AES and DES encryption architectures for embedded systems. The architectures are implemented in the VHDL language and designed for FPGA technology; the proposed implementations are mapped onto Xilinx Spartan 3 devices. Both architectures are applied in a simple ECB (Electronic Codebook) scheme with cache memories. A maximum throughput of 370 Mbps is achieved by the designed DES architecture at a clock frequency of 104 MHz; the throughput of the AES architecture at its maximum clock frequency of 118 MHz is 228 Mbps. Compared to software implementations for embedded systems, both architectures achieve significantly higher throughput.
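As a rough illustration of how the quoted figures relate, the sketch below applies throughput = block_size x f_clk / cycles_per_block to the numbers above; the derived cycle counts are back-of-the-envelope estimates, not values reported in the thesis.

```python
# Illustrative check of the relationship
#   throughput = block_size * f_clk / cycles_per_block
# using the figures quoted above. The derived cycle counts are rough estimates.

def cycles_per_block(block_bits, f_clk_hz, throughput_bps):
    return block_bits * f_clk_hz / throughput_bps

des_cycles = cycles_per_block(64, 104e6, 370e6)    # DES: 64-bit blocks
aes_cycles = cycles_per_block(128, 118e6, 228e6)   # AES: 128-bit blocks
print(f"DES: ~{des_cycles:.0f} cycles per block")  # ~18
print(f"AES: ~{aes_cycles:.0f} cycles per block")  # ~66
```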
|
596 |
Síťová architektura a propojování vestavěných systémů / Network Architecture and Interconnection of Embedded Systems. Trchalík, Roman, January 2013 (has links)
The thesis focuses on the architecture of embedded systems. It summarizes the current state of the accepted standards from the IEEE 1451 family, which deal with creating an environment for sensors and their integration into various networks. These standards describe an open, network-independent communication architecture for sensor-based systems. One of the main outcomes of this work is the set of architectures presented as case studies, which can be used as design patterns for embedded applications. They are demonstrated on ZigBee technology, which is suitable mainly for small devices with very low power consumption. Based on these studies, a new design of a universal gateway is proposed; its major advantage is that it allows the interconnection of endpoints based on different sensor-network technologies. Additionally, the thesis deals with modifying the routing protocol of the ZigBee network in order to reduce the power consumption required to transmit a single data packet.
|
597 |
Hierarchical Partition Based Design Approach for Security of CAN Bus Based Automobile Embedded System. Kalakota, Govardhan Reddy, 02 November 2018 (has links)
No description available.
|
598 |
Automated Vehicle Electronic Control Unit (ECU) Sensor Location Using Feature-Vector Based Comparisons. Buthker, Gregory S., 24 May 2019 (has links)
No description available.
|
599 |
International Summerschool Computer Science 2014: Proceedings of Summerschool 7.7. - 13.7.2014. 18 July 2014 (has links)
Proceedings of International Summerschool Computer Science 2014
|
600 |
ROS-based implementation of a model car with a LiDAR and camera setup. Nises, Marcus, January 2023 (has links)
The aim of this project is to implement a Radio Controlled (RC) car with a Light Detection and Ranging (LiDAR) sensor and a stereoscopic camera setup based on the Robot Operating System (ROS) to conduct Simultaneous Localization and Mapping (SLAM). The LiDAR sensor used is a 2D LiDAR, the RPlidar A1, and the stereoscopic camera setup is made of two monocular cameras, Raspberry Pi Camera v2. The sensors were mounted on the RC car and connected using two Raspberry Pi microcomputers. The 2D LiDAR sensor was used for two-dimensional mapping and the stereo vision from the camera setup for three-dimensional mapping. The RC car's movement information (odometry), necessary for SLAM, was derived from either the LiDAR data or the data from the stereoscopic camera setup. The two means of SLAM were implemented both separately and together for mapping an office space, using the Real-Time Appearance-Based Mapping (RTAB-Map) package in the open-source ROS. The results indicated that the RPlidar A1 provided precise maps but struggled in two respects: when mapping large circular paths, where odometry drift caused the current map to mismatch the earlier map of the same positions, and when localizing during quick turns. The camera setup captured more information about the surroundings and showed more robust odometry; however, it performed poorly at visual loop closure, i.e., the current map did not match the earlier map of previously visited positions.
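As an illustration of how a rectified stereo pair of monocular cameras yields the depth used for three-dimensional mapping, the sketch below applies Z = f * B / d; the focal length and baseline are assumed values, not the calibration of the Raspberry Pi camera rig used in the thesis.

```python
import numpy as np

# Minimal sketch of depth from a rectified stereo pair: depth Z = f * B / d,
# where f is the focal length in pixels, B the baseline between the two cameras
# in metres, and d the per-pixel disparity. The values below are illustrative.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    disparity = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0                      # zero disparity means no match
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Example: a few disparities for an assumed f = 500 px, B = 0.06 m rig
print(depth_from_disparity([30.0, 12.0, 0.0], focal_px=500.0, baseline_m=0.06))
# -> [1.0, 2.5, inf]
```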
|