1

Integrated Software Development Environment for a 32-bit / 16-bit Processor Family

Su, Chien-Chang 30 July 2007
General-purpose microprocessors often have their hardware architecture modified for customized purposes, but existing application programs are then incompatible with the new architecture, which lengthens the product's development period. In this thesis we discuss two kinds of hardware-architecture modification for specific applications: extending the instruction set, and changing the width of the datapath. For the former, our laboratory developed the 32-bit microprocessor SYS32-TM and added the MME instruction set for multimedia applications. For the latter, we developed the 16-bit microprocessor SYS16-TM based on the Thumb instruction set, narrowing the datapath from 32 bits to 16 bits, and we show how existing application programs can be made to execute on the new hardware architecture. For SYS32-TM, we embed the MME instructions in C source code through inline assembly; this requires modifying the assembler to define and parse the MME instruction set so that the assembler can recognize it. For SYS16-TM, sign-extension and address-offset problems arise, so we modify the machine description of the compiler backend to capture the sign-extension and address-offset instruction behaviour, and we modify the library accordingly. To build the SYS16-TM software environment, we set up the C run-time environment in Thumb mode only (exchange between ARM mode and Thumb mode is not supported) and write a linker script that places the program start address at 0x0000, working around ARM's default start address of 0x8000. As a result, on SYS32-TM the modified assembler recognizes MME instructions embedded in existing C source code, and on SYS16-TM we run benchmarks including sorting, the Towers of Hanoi, and Fibonacci numbers, using a simulator to verify correctness.
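As a hedged illustration of the inline-assembly technique described in this abstract: GCC-family compilers let C (or C++) source embed custom instructions once the assembler has been taught the new mnemonics; until then, a raw encoding can be emitted with a `.word` directive. The `mme_add` mnemonic and its placeholder opcode below are invented for illustration and are not the thesis' actual MME encoding.

```cpp
// Hypothetical sketch: embedding a custom "MME" instruction in C/C++ source
// via GCC-style inline assembly. Mnemonic and opcode are invented; the real
// MME instructions are defined by the thesis' modified assembler.
static inline int mme_add(int a, int b) {
    int result;
    // Before the assembler is extended, a raw encoding could be emitted:
    //   __asm__ volatile (".word 0x0E000000");   // placeholder opcode only
    // After extension, the mnemonic itself becomes legal:
    __asm__ volatile ("mme_add %0, %1, %2"        // hypothetical mnemonic
                      : "=r"(result)              // output register
                      : "r"(a), "r"(b));          // input registers
    return result;
}
```

The linker-script change mentioned above is similarly small in a GNU ld script: setting the location counter with `. = 0x0000;` at the start of the text section places the program at address 0x0000 rather than ARM's conventional 0x8000.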
2

Automatic Conversion of the Mathworks' Stateflow Models to C++

Hannis, Melissa Katherine 14 December 2018
Finite state machines are often used to model the decision logic of simulated systems. MathWorks' Stateflow has a graphical user interface that allows users to model finite state machines. A Stateflow model can be added as a block to a Matlab/Simulink model and executed seamlessly with it. Stateflow blocks are developed as “charts” but are natively stored as XML documents. This research explores the possibility of extracting the behavior of the finite state machines defined in a Stateflow chart by parsing the corresponding XML document and reproducing that behavior in a C++ implementation that can be instantiated within a large, C++-based simulation system. The goal of this research is to develop a tool that will automatically generate an equivalent C++ representation, given an arbitrary Stateflow XML model. This research is performed in the context of developing high-fidelity powertrain simulations to be executed in High-Performance Computing environments.
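As a rough sketch of the kind of C++ such a converter might emit, consider a two-state chart; the chart, state names, and events below are invented for illustration and are not taken from the thesis or from Stateflow's actual code generation.

```cpp
#include <iostream>

// Hypothetical hand-written analogue of generated C++ for a two-state chart
// (Idle <-> Running) with one entry action. Names are illustrative only.
enum class State { Idle, Running };
enum class Event { Start, Stop };

class Chart {
public:
    // Dispatch one event: this mirrors a single execution step of the chart.
    void step(Event e) {
        switch (state_) {
        case State::Idle:
            if (e == Event::Start) { state_ = State::Running; onEnterRunning(); }
            break;
        case State::Running:
            if (e == Event::Stop) { state_ = State::Idle; }
            break;
        }
    }
    State state() const { return state_; }
private:
    void onEnterRunning() { std::cout << "Running: entry action\n"; }
    State state_ = State::Idle;
};

int main() {
    Chart c;
    c.step(Event::Start);   // Idle -> Running, prints the entry action
    c.step(Event::Stop);    // Running -> Idle
}
```

A generator would produce the enums, the transition switch, and the action bodies from the states, transitions, and actions parsed out of the chart's XML.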
3

Data Consistency Models and Protocols, Decision and Optimization at Compile Time for Massively Parallel Architectures

Dahmani, Safae 14 December 2015
Manycore architectures consist of hundreds to thousands of embedded cores, distributed memories, and a dedicated network on a single chip. In this context, and because of the scale of the processor, providing a shared-memory system has to rely on efficient hardware and software mechanisms and data-consistency protocols. Numerous works have explored consistency mechanisms designed for highly parallel architectures, and they lead to the conclusion that no single protocol fits all applications and hardware contexts. In order to deal with consistency issues for this kind of architecture, we propose in this work a multi-protocol compilation toolchain in which the shared data of an application can be managed by different protocols. Protocols are chosen and configured at compile time, following the application behaviour and the targeted architecture's specifications.
The application behaviour is characterized by a static analysis process that helps guide the assignment of a protocol to each data access. The platform offers a protocol library in which each protocol is characterized by one or more parameters, the range of possible values of each parameter depending on constraints mainly related to the targeted platform. The protocol configuration relies on a genetic-based engine that instantiates each protocol with appropriate parameter values according to multiple performance objectives. To evaluate the quality of each proposed solution, we use different evaluation models: first a traffic analytical model, which gives NoC communication statistics but no timing information, and then two cycle-based evaluation models that provide more accurate performance metrics while taking into account the contention caused by the consistency protocols' communications. We also propose a cooperative cache-consistency protocol that improves the cache miss rate by sliding data to less stressed neighbours, and an extension of this protocol that dynamically defines the sliding radius assigned to each data migration, based on the mass-spring physical model. Experimental validation of the different contributions compares the sliding-based protocols against a four-state directory-based protocol.
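As a minimal sketch of the multi-objective selection at the heart of a Pareto-based genetic engine such as the one described (the thesis cites the Fast Pareto Genetic Algorithm), the snippet below encodes a candidate as one protocol choice per shared-data access and keeps the non-dominated front; the cost metrics and structure names are placeholders, not the thesis' implementation.

```cpp
#include <array>
#include <cstddef>
#include <vector>

// A candidate solution: one protocol id per shared-data access, with a
// vector-valued cost to minimize (here: {energy, latency} as placeholders).
struct Candidate {
    std::vector<int> protocolPerAccess;
    std::array<double, 2> cost;
};

// Pareto dominance: a dominates b if it is no worse in every objective
// and strictly better in at least one.
bool dominates(const Candidate& a, const Candidate& b) {
    bool strictlyBetter = false;
    for (std::size_t i = 0; i < a.cost.size(); ++i) {
        if (a.cost[i] > b.cost[i]) return false;
        if (a.cost[i] < b.cost[i]) strictlyBetter = true;
    }
    return strictlyBetter;
}

// The non-dominated front of a population: the candidates from which the
// engine would pick a configuration according to the user's chosen
// performance mode (e.g., favouring the energy objective).
std::vector<Candidate> paretoFront(const std::vector<Candidate>& population) {
    std::vector<Candidate> front;
    for (const Candidate& c : population) {
        bool isDominated = false;
        for (const Candidate& other : population)
            if (dominates(other, c)) { isDominated = true; break; }
        if (!isDominated) front.push_back(c);
    }
    return front;
}
```

A per-access encoding like `protocolPerAccess` is what makes the very fine granularity mentioned above (down to one protocol per access) cheap to represent.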
4

Methods and Algorithms for Efficient Programming of FPGA-based Heterogeneous Systems for Object Detection

Kalms, Lester 14 March 2023
Nowadays, there is a high demand for computer vision applications in numerous areas, such as autonomous driving or unmanned aerial vehicles. However, application areas and scenarios are becoming increasingly complex, and their data requirements are growing; meeting these requirements takes increasingly powerful computing systems. FPGA-based heterogeneous systems offer an excellent solution in terms of energy efficiency, flexibility, and performance, especially in the field of computer vision. Due to complex applications and the use of FPGAs in combination with other architectures, efficient programming is becoming increasingly difficult. Developers therefore need a comprehensive framework with efficient automation, good usability, reasonable abstraction, and seamless integration of tools. It should provide an easy entry point and reduce the effort of learning new concepts, programming languages, and tools. Additionally, it needs optimized libraries so the user can focus on developing applications without getting involved in the underlying details. These should be well integrated, easy to use, and cover a wide range of possible use cases. The framework needs efficient algorithms to execute applications on heterogeneous architectures with maximum performance. These algorithms should distribute applications across various nodes with low fragmentation and communication overhead and find a near-optimal solution in a reasonable amount of time. This thesis addresses the research problem of efficiently implementing object detection applications, distributing them across FPGA-based heterogeneous systems, and automating and integrating the process using toolchains. Within this, the three contributions are the HiFlipVX object detection library, the DECISION framework, and the APARMAP application distribution algorithm. HiFlipVX is an open-source HLS-based FPGA library optimized for performance and resource efficiency. It contains 66 highly parameterizable computer vision functions, including neural networks, making it well suited to design space exploration. It extends the OpenVX standard for feature extraction, which is challenging because element sizes are unknown at design time. All functions are streaming-capable to achieve maximum performance by increasing parallelism and reducing off-chip memory access. It does not require external or vendor libraries, which eases project integration, device coverage, and vendor portability, as shown for Intel. The library consumed on average 0.39% FFs and 0.32% LUTs for a set of image processing functions compared to a vendor library. A HiFlipVX implementation of the AKAZE feature detector computes between 3.56 and 4.13 times more pixels per second than the related work, while its resource consumption is comparable to optimized VHDL designs. Its neural network extension achieved a speedup of 3.23 for an AlexNet layer in comparison to a related work, while consuming 73% less on-chip memory. Furthermore, this thesis proposes an improved feature extraction implementation that achieves a repeatability of 72.57% when weighting complex cases, while the next best algorithm only achieves 62.99%. DECISION is a framework consisting of two toolchains for the efficient programming of FPGA-based heterogeneous systems. Both integrate HiFlipVX and use a joint OpenVX-based frontend to implement computer vision applications. It abstracts the underlying hardware and algorithm details while covering a wide range of architectures and applications.
The first toolchain targets x86-based systems consisting of CPUs, GPUs, and FPGAs using OpenCL (Open Computing Language). To create a heterogeneous schedule, it considers device profiles, kernel profiles and estimates, and FPGA dataflow characteristics. It manages synchronization, memory transfers, and data coherence at design time, and creates a runtime-optimized program that excels through high parallelism and low overhead. Additionally, this thesis looks at the integration of OpenCL-based libraries, automatic OpenCL kernel generation, and OpenCL kernel optimization and comparison across different architectures. The second toolchain creates an application-specific and adaptive NoC-based architecture. The streaming-optimized architecture enables vision functions to be reused by multiple applications, improving resource efficiency while maintaining high performance. For a set of example applications, resource consumption was more than halved, while the performance overhead was only 0.015%. APARMAP is an application distribution algorithm for partition-based and mesh-like FPGA topologies. It uses a NoC (Network-on-Chip) as the communication infrastructure to connect reconfigurable regions and generate an application-specific hardware architecture. The algorithm uses load-balancing techniques to find reasonable solutions within a predictable and scalable amount of time, and optimizes solutions using heuristics such as Simulated Annealing and Tabu Search. It uses a multithreaded grid-based approach to prevent threads from calculating the same solution and getting stuck in local minima. Its constraints and objectives are FPGA resource utilization, NoC bandwidth consumption, NoC hop count, and the execution time of the proposed algorithm. The evaluation showed that the algorithm can deal with heterogeneous and irregular host graph topologies, and that it scales well in computation time as the number of nodes and partitions increases. It achieved an optimal placement for a set of example graphs of up to 196 nodes on host graphs of up to 49 partitions. For a real application with 271 nodes and 441 edges, it achieved a distribution with low resource fragmentation in an average time of 149 ms.
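To make the streaming idea concrete, here is an illustrative HLS-style function in the spirit of such a library — this is not HiFlipVX's actual API; the template parameters follow generic HLS conventions, and the function is deliberately written as plain C++ so it compiles without vendor tools.

```cpp
#include <cstdint>

// Illustrative streaming vision function (NOT the HiFlipVX API): a pixel-wise
// threshold processed in a single pass, so no off-chip frame buffer is needed.
// VEC_NUM models the number of pixels handled per cycle (parallel lanes); in
// real HLS code the loops would carry PIPELINE/UNROLL pragmas.
template <typename T, int WIDTH, int HEIGHT, int VEC_NUM>
void streamThreshold(const T* input, T* output, T threshold) {
    constexpr int PIXELS = WIDTH * HEIGHT;
    static_assert(PIXELS % VEC_NUM == 0, "image size must divide into vectors");
    for (int i = 0; i < PIXELS; i += VEC_NUM) {
        for (int v = 0; v < VEC_NUM; ++v) {   // unrolled lanes in hardware
            const T px = input[i + v];
            output[i + v] = (px > threshold) ? T(255) : T(0);
        }
    }
}

// Example instantiation: 8-bit 640x480 image, 4 pixels per cycle.
template void streamThreshold<std::uint8_t, 640, 480, 4>(
    const std::uint8_t*, std::uint8_t*, std::uint8_t);
```

Heavy compile-time parameterization of this sort (pixel type, resolution, vectorization) is what makes such libraries usable for design space exploration.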
5

Model-Based Design of Field Device Applications

Mätzler, Stefan 26 July 2021
The development of field devices is a very complex procedure: many preconditions need to be met, and various requirements and constraints need to be addressed, yet there are only a few publications on this topic. Due to ongoing digitalization, more and more solution providers are entering the industrial automation market, and technologies and approaches from the context of the Internet of Things are increasingly used in the automation domain. These approaches range from sensors lacking the descriptions typical in industry up to marketplaces where integrators and users can buy software components for plants. For new suppliers, who often do not come from the classical automation business, the existing models, functionalities, profiles, and descriptions are not always easy to use. This results in disruptive solutions based on newly defined specifications and models. Despite this disruptiveness, the aim should be not to reinvent proven automation functions, but to use them effectively and efficiently on different platforms depending on the requirements. This explicitly includes their flexible distribution to heterogeneous networked resources. The platforms can be classical field devices and controllers as well as normal desktop PCs and IoT nodes. The aim of this thesis is to develop a toolchain for the model-based design of field device applications based on profiles, and thus also one suitable for the extended design of distributed plant applications. To this end, different description methods are evaluated in order to enrich them with detailed descriptions of parameters and process data. Furthermore, concepts of modularity are used, and preparations are made for the use of semantics in the design process. With regard to the device engineering process, the share of automated device engineering is increased. This leads to more flexible device development, allowing customers to network the functional elements themselves. Customers should also be able to deploy their own functional elements to the manufacturers' devices, which requires automated creation of device descriptions. All these extensions then enable the last major step towards a distributed application over heterogeneous infrastructures: the functional elements can not only be distributed by the device manufacturers, but can also be used on different platforms from different manufacturers. This goes hand in hand with the device-independent definition of functionality required by current developments such as Industry 4.0.
All information created during engineering can be reused at the different levels of the automation pyramid and throughout the life cycle. An integration of device families from outside automation technology, such as IoT devices and IT devices, is thus conceivable. After an analysis of the relevant techniques, technologies, concepts, methods, and specifications, a toolchain for the model-based design of field devices was developed, and the required tool parts and extensions to existing descriptions were discussed. This concept was then extended to distributed design on heterogeneous hardware and heterogeneous platforms, before both concepts were prototypically implemented and evaluated. The evaluation is based on a two-part scenario covering the perspectives of both a device manufacturer and an integrator. The developed solution integrates approaches from the context of Industry 4.0 and IoT and contributes to a simplified and more efficient automation of engineering. Profiles can be used as building blocks for the functionality of field devices and plant applications. Existing limitations in engineering are thus reduced, so that a distribution of functionality across heterogeneous hardware and heterogeneous platforms becomes possible, contributing to the flexibility of automation systems.
6

Making a common graphical language for the validation of linked data

Echegaray, Daniel January 2017 (has links)
A variety of embedded systems is used within the design and construction of trucks at Scania. Because of their heterogeneity and complexity, such systems require many software tools to support embedded-systems development. These tools need to form a well-integrated and effective development environment in order to ensure that product data is consistent and correct across the developing organisation. A prototype is under development that adopts a linked-data approach to data integration; more specifically, it adopts the Open Services for Lifecycle Collaboration (OSLC) specification. The prototype allows users to design OSLC interfaces between product-management tools and OSLC links between their data. The user can further apply constraints on the data conforming to the OSLC validation language Resource Shapes (ReSh). The problem is that the prototype conforms only to Resource Shapes, whose constraints are often too coarse-grained for Scania's needs, and that no standardised language exists for the validation of linked data. To frame this study, two research questions were formulated: (1) How can a common graphical language be created to support all validation technologies for RDF data? and (2) How can this graphical language support the automatic generation of RDF graphs? A case study was conducted on a software tool named SESAMM-tool at Scania, comprising a constraint-language comparison and an extension of the prototype. A design-science research strategy was followed, searching for an effective artefact to answer the stated research questions; design science promotes an iterative process of implementation and evaluation. Data was collected empirically in an iterative development process and evaluated using the methods of informed argument and controlled experiment, respectively, for the constraint-language comparison and the prototype extension. Two constraint languages were investigated: Shapes Constraint Language (SHACL) and Shape Expressions (ShEx). The comparison identified SHACL as the constraint language with the larger domain of constraints and the finer-grained constraints, as well as the possibility of defining new ones: SHACL constraints were measured to cover 89.5% of ShEx constraints, versus 67.8% for the converse, and SHACL and ShEx coverage of ReSh property constraints was measured at 75% and 50%, respectively. SHACL was therefore recommended and chosen for extending the prototype. In the extension, abstract superclasses were introduced into the underlying data model, with the constraint-language classes as subclasses; SHACL was added as such a subclass. This design increased code reuse within the prototype but gave rise to issues relating to the plug-in technologies the prototype is built upon. The current solution still has the issue that properties of one constraint language may be added to classes of another constraint language.
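For readers unfamiliar with the two validation languages compared above, a small SHACL shape (written in Turtle, SHACL's native notation; this example is illustrative and not taken from the thesis or from Scania's data model) shows the kind of fine-grained, per-property constraints the comparison refers to:

```turtle
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/> .

# Illustrative shape: every ex:Signal must have exactly one string name and
# may carry a bit length constrained to the range 1..64.
ex:SignalShape
    a sh:NodeShape ;
    sh:targetClass ex:Signal ;
    sh:property [
        sh:path     ex:name ;
        sh:datatype xsd:string ;
        sh:minCount 1 ;
        sh:maxCount 1 ;
    ] ;
    sh:property [
        sh:path         ex:bitLength ;
        sh:datatype     xsd:integer ;
        sh:minInclusive 1 ;
        sh:maxInclusive 64 ;
    ] .
```

Value-range constraints such as `sh:minInclusive` illustrate the finer granularity SHACL offers over Resource Shapes.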
7

Increasing efficiency in ECU function development for Battery Management Systems

Singh Rajput, Shivaram January 2016 (has links)
In the automotive industry today, the focus of ECU function development is always on finding the best possible combination of control algorithms and parameters. Complex algorithms with a broad implementation range require optimal calibration of ECU parameters to achieve the desired behaviour over the vehicle's drive cycle. With the growing functional complexity of automotive E/E systems, the traditional approaches to designing automotive embedded systems are no longer suitable. To overcome this complexity, many of the leading automotive companies have formed a partnership to develop and establish an open industry standard for automotive E/E architecture called AUTOSAR. In this thesis, a toolchain for ECU function development following the AUTOSAR standard, together with an efficient measurement and calibration mechanism using XCP on CAN, is investigated and implemented. Two toolchains are proposed, with a description of their usage in the different stages of ECU function development and in calibration; both are tested to demonstrate that they work.
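As a schematic sketch of what "XCP on CAN" traffic looks like at the frame level, the snippet below builds the CONNECT and SHORT_UPLOAD command frames a calibration master might send to read an ECU parameter. The command codes are given as the ASAM XCP specification defines them to the best of recollection (CONNECT = 0xFF, SHORT_UPLOAD = 0xF4) and should be verified against the standard; `sendCanFrame` and the CAN identifier are placeholders for whatever CAN driver and network configuration are in use.

```cpp
#include <array>
#include <cstdint>

using CanFrame = std::array<std::uint8_t, 8>;

// Placeholder: supplied by the CAN driver actually in use (assumption).
void sendCanFrame(std::uint32_t canId, const CanFrame& data);

constexpr std::uint32_t kXcpMasterId = 0x700;  // example master->ECU CAN id

// Open an XCP session with the ECU.
void xcpConnect() {
    CanFrame f{};          // unused bytes stay zero
    f[0] = 0xFF;           // CONNECT command code (per ASAM XCP; verify)
    f[1] = 0x00;           // mode: normal
    sendCanFrame(kXcpMasterId, f);
}

// Request `count` bytes from `address` in ECU memory (e.g., a calibration
// parameter); the ECU replies with a response frame carrying the data.
void xcpShortUpload(std::uint32_t address, std::uint8_t count) {
    CanFrame f{};
    f[0] = 0xF4;                                    // SHORT_UPLOAD (verify)
    f[1] = count;                                   // number of data elements
    f[2] = 0x00;                                    // reserved
    f[3] = 0x00;                                    // address extension
    f[4] = static_cast<std::uint8_t>(address);      // address, little-endian
    f[5] = static_cast<std::uint8_t>(address >> 8);
    f[6] = static_cast<std::uint8_t>(address >> 16);
    f[7] = static_cast<std::uint8_t>(address >> 24);
    sendCanFrame(kXcpMasterId, f);
}
```

A calibration write would follow the same pattern with SET_MTA and DOWNLOAD commands; periodic measurement (DAQ) traffic uses separate, ECU-driven frames.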
