About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

Exploração de sequências de otimização do compilador baseada em técnicas hibridas de mineração de dados complexos / Exploration of optimization sequences of the compiler based on hybrid techniques of complex data mining

Luiz Gustavo Almeida Martins 25 September 2015 (has links)
Devido ao grande número de otimizações fornecidas pelos compiladores modernos e à ampla possibilidade de ordenação dessas transformações, uma eficiente Exploração do Espaço de Projeto (DSE) se faz necessária para procurar a melhor sequência de otimização de uma determinada função ou fragmento de código. Como esta exploração é uma tarefa complexa e dispendiosa, apresentamos uma nova abordagem de DSE capaz de reduzir esse tempo de exploração e selecionar sequências de otimização que melhoraram o desempenho dos códigos transformados. Nossa abordagem utiliza um conjunto de funções de referência, para as quais uma representação simbólica do código (DNA) e a melhor sequência de otimização são conhecidas. O DSE de novas funções é baseado em uma abordagem de agrupamento aplicado sobre o código DNA que identifica similaridades entre funções. O agrupamento utiliza três técnicas para a mineração de dados: distância de compressão normalizada, algoritmo de reconstrução de árvores filogenéticas (Neighbor Joining) e identificação de grupos por ambiguidade. As otimizações das funções de referência identificadas como similares formam o espaço que é explorado para encontrar a melhor sequência para a nova função. O DSE pode utilizar o conjunto reduzido de otimizações de duas formas: como o espaço de projeto ou como a configuração inicial do algoritmo. Em ambos os casos, a adoção de uma pré-seleção baseada no agrupamento permite o uso de algoritmos de busca simples e rápidos. Os resultados experimentais revelam que a nova abordagem resulta numa redução significativa no tempo total de exploração, ao mesmo tempo que alcança um desempenho próximo ao obtido através de uma busca mais extensa e dispendiosa baseada em algoritmos genéticos. 
/ Due to the large number of optimizations provided by modern compilers and to the many possible orderings of these transformations, a Design Space Exploration (DSE) is necessary to search for the best sequence of compiler optimizations for a given code fragment (e.g., a function). As this exploration is a complex and time-consuming task, we present new DSE strategies that reduce the exploration time while still selecting optimization sequences able to improve the performance of each function. The DSE is based on a clustering approach which groups functions with similarities and then explores the reduced search space formed by the optimizations previously suggested for the functions in each group. The identification of similarities between functions uses a data mining method applied to a symbolic representation of the source code. The DSE strategies use the reduced optimization set identified by clustering in two ways: as the design space or as the initial configuration of the algorithm. In both cases, the adoption of a pre-selection based on clustering allows the use of simple and fast DSE algorithms. Several experiments evaluating the effectiveness of the proposed approach address the exploration of compiler optimization sequences. In addition, we investigate the impact of each technique and component employed in the selection process. Experimental results reveal that our new clustering-based DSE approach achieves a significant reduction in the total exploration time of the search space while obtaining performance speedups close to those of a traditional genetic algorithm-based approach.
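The normalized compression distance used in the clustering step can be illustrated in a few lines. This is an illustrative sketch only, not the thesis's implementation: zlib stands in for the actual compressor, and the "DNA" strings are invented stand-ins for the symbolic code representation.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for very similar inputs,
    near 1 for unrelated ones."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Hypothetical "DNA" strings: symbolic encodings of three functions.
dna_a = b"LOAD ADD STORE LOAD MUL STORE BRANCH " * 20
dna_b = b"LOAD ADD STORE LOAD MUL STORE BRANCH " * 19 + b"CALL RET"
dna_c = b"PUSH POP CALL RET JMP CMP TEST XOR " * 20

# Similar functions compress well together, so their pairwise NCD is smaller.
assert ncd(dna_a, dna_b) < ncd(dna_a, dna_c)
```

Distances like these feed a Neighbor Joining tree, from which groups of similar functions are read off.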
122

Architectures parallèles reconfigurables pour le traitement vidéo temps-réel / Parallel reconfigurable hardware architectures for video processing applications

Ali, Karim Mohamed Abedallah 08 February 2018 (has links)
Les applications vidéo embarquées sont de plus en plus intégrées dans des systèmes de transport intelligents tels que les véhicules autonomes. De nombreux défis sont rencontrés par les concepteurs de ces applications, parmi lesquels : le développement des algorithmes complexes, la vérification et le test des différentes contraintes fonctionnelles et non-fonctionnelles, la nécessité d’automatiser le processus de conception pour augmenter la productivité, la conception d’une architecture matérielle adéquate pour exploiter le parallélisme inhérent et pour satisfaire la contrainte temps-réel, réduire la puissance consommée pour prolonger la durée de fonctionnement avant de recharger le véhicule, etc. Dans ce travail de thèse, nous avons utilisé les technologies FPGAs pour relever certains de ces défis et proposer des architectures matérielles reconfigurables dédiées pour des applications embarquées de traitement vidéo temps-réel. Premièrement, nous avons implémenté une architecture parallèle flexible avec deux contributions principales : (1) Nous avons proposé un modèle générique de distribution/collecte de pixels pour résoudre le problème de transfert de données à haut débit à travers le système. Les paramètres du modèle requis sont tout d’abord définis puis la génération de l’architecture a été automatisée pour minimiser le temps de développement. (2) Nous avons appliqué une technique d’ajustement de la fréquence pour réduire la consommation d’énergie. Nous avons dérivé les équations nécessaires pour calculer le niveau maximum de parallélisme ainsi que les équations utilisées pour calculer la taille des FIFO pour le passage d’un domaine de l’horloge à un autre. Au fur et à mesure que le nombre de cellules logiques sur une seule puce FPGA augmente, passer à des niveaux d’abstraction plus élevés devient inévitable pour réduire la contrainte de « time-to-market » et augmenter la productivité des concepteurs. 
Pendant la phase de conception, l’espace de solutions architecturales présente un grand nombre d’alternatives avec des performances différentes en termes de temps d’exécution, ressources matérielles, consommation d’énergie, etc. Face à ce défi, nous avons développé l’outil ViPar avec deux contributions principales : (1) Un modèle empirique a été introduit pour estimer la consommation d’énergie basé sur l’utilisation du matériel (Slice et BRAM) et la fréquence de fonctionnement ; en plus de cela, nous avons dérivé les équations pour estimer les ressources matérielles et le temps d’exécution pour chaque alternative au cours de l’exploration de l’espace de conception. (2) En définissant les principales caractéristiques de l’architecture parallèle comme le niveau de parallélisme, le nombre de ports d’entrée/sortie, le modèle de distribution des pixels, ..., l’outil ViPar génère automatiquement l’architecture matérielle pour les solutions les plus pertinentes. Dans le cadre d’une collaboration industrielle avec NAVYA, nous avons utilisé l’outil ViPar pour implémenter une solution matérielle parallèle pour l’algorithme de stéréo matching « Multi-window Sum of Absolute Difference ». Dans cette implémentation, nous avons présenté un ensemble d’étapes pour modifier le code de description de haut niveau afin de l’adapter efficacement à l’implémentation matérielle. Nous avons également exploré l’espace de conception pour différentes alternatives en termes de performance, ressources matérielles, fréquence, et consommation d’énergie. Au cours de notre travail, les architectures matérielles ont été implémentées et testées expérimentalement sur la plateforme d’évaluation Xilinx Zynq ZC706. / Embedded video applications are now involved in sophisticated transportation systems like autonomous vehicles. 
Designers of these applications face many challenges, among them: complex algorithms must be developed, verified and tested under tight time-to-market constraints; design automation tools are needed to increase design productivity; high computing rates are required to exploit the inherent parallelism and satisfy the real-time constraints; the consumed power must be reduced to extend the operating duration before recharging the vehicle; and so on. In this thesis work, we used FPGA technologies to tackle some of these challenges and design parallel reconfigurable hardware architectures for embedded video streaming applications. First, we implemented a flexible parallel architecture with two main contributions: (1) We proposed a generic model for pixel distribution/collection to tackle the problem of high-throughput data transfer through the system. The required model parameters were defined, then the architecture generation was automated to minimize the development time. (2) We applied frequency scaling as a technique for reducing power consumption. We derived the equations for calculating the maximum level of parallelism, as well as those for calculating the depth of the FIFOs inserted for clock domain crossing. As the number of logic cells on a single FPGA chip increases, moving to higher abstraction levels of design becomes inevitable to shorten the time-to-market and to increase design productivity. During the design phase, it is common to have a space of design alternatives that differ in hardware utilization, power consumption and performance. We developed the ViPar tool with two main contributions to tackle this problem: (1) An empirical model was introduced to estimate the power consumption based on hardware utilization (Slice and BRAM) and operating frequency.
In addition, we derived equations for estimating the hardware resources and the execution time of each point during the design space exploration. (2) By defining the main characteristics of the parallel architecture, such as the parallelism level, the number of input/output ports and the pixel distribution pattern, the ViPar tool can automatically generate the parallel architecture for the designs selected for implementation. In the context of an industrial collaboration, we used high-level synthesis tools to implement a parallel hardware architecture for the Multi-window Sum of Absolute Difference stereo matching algorithm. In this implementation, we presented a set of guiding steps to modify the high-level description code so that it maps efficiently onto hardware, and we explored the design space of different alternatives in terms of hardware resources, performance, frequency and power consumption. During the thesis work, our designs were implemented and tested experimentally on the Xilinx Zynq ZC706 (XC7Z045-FFG900) evaluation board.
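The FIFO-depth derivation for clock domain crossing follows a standard back-of-the-envelope argument: during a write burst, the FIFO must absorb the words that the slower read clock cannot drain, plus a small synchronizer margin. The sketch below is a hypothetical illustration; the burst length, frequencies and margin are assumptions, not values from the thesis.

```python
import math

def cdc_fifo_depth(burst_len: int, f_write_mhz: float, f_read_mhz: float,
                   sync_stages: int = 2) -> int:
    """Minimum FIFO depth for a write burst crossing into a read clock domain:
    words accumulated during the burst, plus synchronizer margin (assumed 2)."""
    if f_read_mhz >= f_write_mhz:
        return sync_stages  # reader keeps up; only the margin is needed
    # Words drained while the burst is being written, rounded down (worst case).
    drained = math.floor(burst_len * f_read_mhz / f_write_mhz)
    return burst_len - drained + sync_stages

# e.g. 120-word bursts written at 150 MHz, drained at 100 MHz:
depth = cdc_fifo_depth(120, 150.0, 100.0)  # 40-word backlog + 2 margin = 42
```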
123

Implementing an Interactive Simulation Data Pipeline for Space Weather Visualization

Berg, Matthias, Grangien, Jonathan January 2018 (has links)
This thesis details work carried out by two students working as contractors at the Community Coordinated Modeling Center at Goddard Space Flight Center of the National Aeronautics and Space Administration. The thesis is made possible by, and aims to contribute to, the OpenSpace project. The first track of the work is the handling and assembly of new data for a visualization of coronal mass ejections in OpenSpace. The new data allows for observation of coronal mass ejections at their origin by the surface of the Sun, whereas previous data visualized them only from 30 solar radii outwards. Previously implemented visualization techniques are used together to visualize different volume data and fieldlines, which, together with a synoptic magnetogram of the Sun, give a multi-layered visualization. The second track is an experimental implementation of a generalized, less user-involved process for getting new data into OpenSpace, with priority given to volume data as the area of prior experience. The results show a space weather model visualization, and how one such model can be adapted to fit within the parameters of the OpenSpace project. Additionally, the results show how a GUI connected to a series of background events can form a data pipeline that makes complicated space weather models more easily available.
124

Design Space Exploration and Architecture Design for Inference and Training Deep Neural Networks

Qi, Yangjie January 2021 (has links)
No description available.
125

Range-based Wireless Sensor Network Localization for Planetary Rovers

Svensson, August January 2020 (has links)
Obstacles faced in planetary surface exploration require innovation in many areas, primarily that of robotics. To be able to study interesting areas that are by current means hard to reach, such as steep slopes, ravines, caves and lava tubes, the surface vehicles of today need to be modified or augmented. One augmentation with such a goal is PHALANX (Projectile Hordes for Advanced Long-term and Networked eXploration), a prototype system being developed at the NASA Ames Research Center. PHALANX uses remote deployment of expendable sensor nodes from a lander or rover vehicle. This enables in-situ measurements in hard-to-reach areas with reduced risk to the rover. The deployed sensor nodes are equipped with capabilities to transmit data wirelessly back to the rover and to form a network with the rover and other nodes. Knowledge of the location of deployed sensor nodes and the momentary location of the rover is greatly desired. PHALANX can be of aid in this aspect as well. With the addition of inter-node and rover-to-node range measurements, a range-based network SLAM (Simultaneous Localization and Mapping) system can be implemented for the rover to use while it is driving within the network. The resulting SLAM system in PHALANX shares characteristics with others in the SLAM literature, but with some additions that make it unique. One crucial addition is that the rover itself deploys the nodes. Another is the ability for the rover to more accurately localize deployed nodes by external sensing, such as by utilizing the rover cameras. In this thesis, the SLAM of PHALANX is studied by means of computer simulation. The simulation software is created using real mission values and values resulting from testing of the PHALANX prototype hardware. An overview of issues that a SLAM solution has to face, as present in the literature, is given in the context of the PHALANX SLAM system, such as poor connectivity and highly collinear placements of nodes.
The system performance and sensitivities are then investigated for the described issues, using predicted typical PHALANX application scenarios. The results are presented as errors in the estimated positions of the sensor nodes and in the estimated position of the rover. I find that there are relative sensitivities to the investigated parameters, but that in general SLAM in PHALANX is fairly insensitive. This gives mission planners and operators greater flexibility to prioritize other aspects important to the mission at hand. The simulation software developed in this thesis work also has the potential to be expanded on as a tool for mission planners to prepare for specific mission scenarios using PHALANX.
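The core of a range-based position fix of the kind such a SLAM system builds on can be illustrated with linearized trilateration: subtracting one range-circle equation from the others yields a linear system in the unknown position. The node layout and helper below are hypothetical, not taken from the thesis.

```python
def trilaterate(anchors, ranges):
    """Estimate a 2-D position from ranges to >= 3 known anchors by
    linearizing the circle equations against the first anchor and
    solving the resulting least-squares normal equations."""
    (x1, y1), r1 = anchors[0], ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        A.append((2 * (xi - x1), 2 * (yi - y1)))
        b.append(r1**2 - ri**2 + xi**2 - x1**2 + yi**2 - y1**2)
    # Normal equations (A^T A) p = A^T b, solved directly for the 2x2 case.
    s11 = sum(a[0] * a[0] for a in A)
    s12 = sum(a[0] * a[1] for a in A)
    s22 = sum(a[1] * a[1] for a in A)
    t1 = sum(a[0] * bi for a, bi in zip(A, b))
    t2 = sum(a[1] * bi for a, bi in zip(A, b))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)

# Hypothetical layout: three deployed nodes, rover actually at (2, 1).
nodes = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (2.0, 1.0)
dists = [((true_pos[0] - x)**2 + (true_pos[1] - y)**2) ** 0.5 for x, y in nodes]
est = trilaterate(nodes, dists)  # recovers (2.0, 1.0) for noise-free ranges
```

With noisy ranges the same normal equations give the least-squares fix; collinear node placements make `det` small, which is exactly the poor-geometry sensitivity discussed above.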
126

Simulation and Study of Gravity Assist Maneuvers / Simulering och studie av gravitationsassisterade manövrar

Santos, Ignacio January 2020 (has links)
This thesis takes a closer look at the complex maneuver known as gravity assist, a popular method of interplanetary travel. The maneuver is used to gain or lose momentum by flying by planets, which induces a speed and direction change. A simulation model is created using the General Mission Analysis Tool (GMAT), which is intended to be easily reproduced and altered to match any desired gravity assist maneuver. The validity of its results is analyzed, comparing them to available data from real missions. Some parameters, including speed and trajectory, are found to be extremely reliable. The model is then used as a tool to investigate the way that different parameters impact this complex environment, and the advantages of performing thrusting burns at different points during the maneuver are explored. According to theory, thrusting at the point of closest approach to the planet is thought to be the most efficient method for changing speed and direction of flight. However, the results from this study show that thrusting before this point can have some major advantages, depending on the desired outcome. The reason behind this is concluded to be the high sensitivity of the gravity assist maneuver to the altitude and location of the point of closest approach. / Detta examensarbete tittar närmare på den komplexa manöver inom banmekanik som kallas gravitationsassisterad manöver, vilken är vanligt förekommande vid interplanetära rymduppdrag. Manövern används för att öka eller minska farkostens rörelsemängd genom att flyga förbi nära planeter, vilket ger upphov till en förändring i fart och riktning. En simuleringsmodell är skapad i NASAs mjukvara GMAT med syftena att den ska vara reproducerbar samt möjlig att ändra för olika gravitationsassisterade manövrar. Resultaten från simuleringarna är validerade mot tillgängliga data från riktigt rymduppdrag. Vissa parametrar, som fart och position, har en väldigt bra överenstämmelse. 
Modellen används sedan för att noggrannare undersöka hur olika parametrar påverkar det komplexa beteendet vid en graviationsassisterad manöver, genom att specifikt titta på effekterna av en pålagd dragkraft från motorn under den gravitationsassisterade manövern. Teoretiskt fås mest effekt på fart och riktning om dragkraften från motorn läggs på vid punkten närmast planeten. Resultaten från denna studie visar att beroende på vilken parameter man vill ändra så kan man erhålla mer effekt genom att lägga på dragkraften innan den närmsta punkten. Förklaringen till detta är att den gravitationsassisterade manövern är väldigt icke-linjär, så en tidigare pålagd dragkraft kan kraftigt förändra farkostens bana nära planeten, så att farkosten t.ex. kommer närmare och då påverkas mer.
127

Throughput Constrained and Area Optimized Dataflow Synthesis for FPGAs

Sun, Hua 21 February 2008 (has links) (PDF)
Although high-level synthesis has been researched for many years, synthesizing minimum hardware implementations under a throughput constraint for computationally intensive algorithms remains a challenge. In this thesis, three important techniques are studied carefully and applied in an integrated way to meet this challenging synthesis requirement. The first is pipeline scheduling, which generates a pipelined schedule that meets the throughput requirement. The second is module selection, which decides the most appropriate circuit module for each operation. The third is resource sharing, which reuses a circuit module by sharing it between multiple operations. This work shows that combining module selection and resource sharing while performing pipeline scheduling can significantly reduce the hardware area, by either using slower, more area-efficient circuit modules or by time-multiplexing faster, larger circuit modules, while meeting the throughput constraint. The results of this work show that the combined approach can generate on average 43% smaller hardware than is possible when a single technique (resource sharing or module selection) is applied. There are four major contributions of this work. First, given a fixed throughput constraint, it explores all feasible frequency and data introduction interval design points that meet the constraint. This enlarged pipelining design space exploration yields hardware architectures superior to those of previous pipeline synthesis work because of the larger space explored. Second, the module selection algorithm in this work considers different module architectures, as well as different pipelining options for each architecture. This not only addresses the unique architecture of most FPGA circuit modules, but also performs retiming at the high-level synthesis level. Third, this work proposes a novel approach that integrates the three inter-related synthesis techniques of pipeline scheduling, module selection and resource sharing.
To the author's best knowledge, this is the first attempt to do so. The integrated approach is able to identify more efficient hardware implementations than when only one or two of the three techniques are applied. Fourth, this work proposes and implements several algorithms that explore the combined pipeline scheduling, module selection and resource sharing design space, and identifies the most efficient hardware architecture under the synthesis constraint. These algorithms explore the combined design space in different ways, representing the trade-off between algorithm execution time and the size of the explored design space.
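The first contribution, enumerating all feasible frequency and data introduction interval (II) pairs, can be sketched directly: sustained throughput is f/II, so every (f, II) pair at or above the requirement is a candidate pipelining design point. The numbers below are hypothetical, not from the thesis.

```python
def feasible_design_points(throughput_req_msps, freqs_mhz, max_ii=8):
    """Enumerate (clock frequency, initiation interval) pairs whose
    sustained throughput f/II meets the constraint. Each pair is a
    distinct pipelining design point to schedule and bind for."""
    return [(f, ii)
            for f in freqs_mhz
            for ii in range(1, max_ii + 1)
            if f / ii >= throughput_req_msps]

# Hypothetical: 50 MSamples/s required, three candidate clock frequencies.
points = feasible_design_points(50.0, [100.0, 150.0, 200.0])
# Higher clocks admit larger IIs, opening room for module reuse
# (resource sharing) or slower, smaller modules (module selection).
```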
128

A Method for Standardization within the Payload Interface Definition of a Service-Oriented Spacecraft using a Modified Interface Control Document / En metod för standardisering av nyttolastgränsyta för en service-orienterad rymdfarkost via ett modifierat dokumentet för gränssnittskontroll

Klicker, Laura January 2017 (has links)
With a big picture view of increasing the accessibility of space, standardization is applied within a service-oriented space program. The development of standardized spacecraft interfaces for numerous and varied payloads is examined through the lens of the creation of an Interface Control Document (ICD) within the Peregrine Lunar Lander project of Astrobotic Technologies, Inc. The procedure is simple, transparent, and adaptable; its applicability to other similar projects is assessed. / För en ökad tillgång till rymden finns det behov av standardisering för en förbättrad service. Utvecklingen av standardiserade rymdfarkostgränsytor för flera och olika nyttolaster har undersökts via ett dokumentet för gränssnittskontroll (ICD) inom projektet Peregrine Lunar Lander för Astrobotic Technologies, Inc. Proceduren är enkel, transparent och anpassningbar; dess användning för andra liknande projekt har värderats.
129

Evaluation of Potential Propulsion Systems for a Commercial Micro Moon Lander

Papavramidis, Konstantinos January 2019 (has links)
With the advent of the Space 4.0 era and the commercialization and increased accessibility of space, a requirements analysis, trade-off options, the development status and the critical areas of a propulsion system for a commercial micro Moon lander are presented. The investigation of a suitable system for the mission is carried out in the frame of the ASTRI project of OHB System AG and Blue Horizon. The main trajectory strategies are investigated and simulations are performed to extract the ∆V requirements. Top-level requirements are then derived, giving the first input for the propulsion design. An evaluation of the propulsion requirements is carried out, outlining the factors that matter most and drive the propulsion design. The evaluation uses a pairwise comparison of the requirements from which weighting factors are extracted, yielding the main drivers of the propulsion system design. A trade-off analysis is performed for various types of propulsion systems, and a preliminary selection of a propulsion system suitable for the mission is described. A first-iteration architecture of the propulsion, ADCS and GNC subsystems is also presented, along with a component list. A first approach to the landing phase is described and an estimate of the required thrust is calculated. A unified bipropellant propulsion system is proposed which fulfills most of the mission requirements. However, the analysis shows that the total mass of the lander, including all margins, slightly exceeds the mass limit, though not the volume limit. The results show that a decrease in payload capacity or the implementation of a different trajectory strategy can lower the mass below the limit. In addition, further iterations on the lander concept, yielding a more detailed design that carries no extra margins, can also drive the mass below the limit.
Finally, the results are discussed, addressing the limitations and the important factors that need to be considered for the mission. The viability of the mission in its commercial aspect is questioned, and further investigation of the ”micro” lander concept is suggested. / I tillkomsten av Space 4.0 era med kommersialisering och ökad tillgänglighet av rymden, en kravanalys, avvägningsalternativ, utvecklingsstatus och kritiska områden av ett framdrivningssystem för en kommersiell mikro månlandare bärs ut. En undersökning av ett lämpligt system för det aktuella uppdraget genomförs inom ramen för ASTRI-projektet för OHB System AG och Blue Horizon. Olika strategier för banor undersöks och simuleringar utförs för att extrahera ΔV-kraven. Topp-nivå krav definieras och ger den första inputen för designen av framdrivningssystemet. En utvärdering av framdrivningskraven implementeras och belyser de viktigaste faktorer som driver design av framdrivningssystemet. En avvägningsanalys utförs för olika typer av framdrivningssystem och ett preliminärt urval av ett framdrivningssystem som är lämpligt för uppdraget beskrivs. En arkitektur för framdrivningen, ADCS och GNC-delsystem presenteras såväl som en komponentlista. Ett första tillvägagångssätt av landningsfasen beskrivs och en uppskattning av den nödvändiga dragkraften beräknas. Ett enhetligt Bi-propellant framdrivningssystem föreslås som uppfyller ut de flesta uppdragskraven. Analysen visar dock att summan av månlandarens massa, inklusive alla marginaler, överstiger massbegränsningarna men inte de volymbegränsningarna uppsatta i projektet. Resultaten visar att en minskning av nyttolastkapaciteten eller genomförandet av en annan banstrategi kan minska den totala massan då den inom gränsvärdena.
Dessutom, ytterligare iterationer i månlandarens koncept som kommer att ge en mer detaljerad design, vilket resulterar i inga extra marginaler, kan leda till att den uppskattade massan minskar ytterligare. Slutligen förs en diskussion om resultaten, med hänsyn till de begränsningarna och de viktigaste faktorerna som måste beaktas för uppdraget. Lönsamheten hos uppdraget på grund av sin kommersiella aspekt är ifrågasatt och vidare utredning föreslås utförs på ”mikro” månlandare konceptet.
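The link from an extracted ΔV requirement to the propellant share of the mass budget is the Tsiolkovsky rocket equation. The sketch below is illustrative only; the dry mass, ΔV and specific impulse values are assumptions, not the thesis's figures.

```python
import math

def propellant_mass(dry_mass_kg: float, delta_v_ms: float, isp_s: float,
                    g0: float = 9.80665) -> float:
    """Propellant needed to give a dry mass the required total delta-v,
    from the Tsiolkovsky rocket equation:
    m_p = m_dry * (exp(dv / (Isp * g0)) - 1)."""
    return dry_mass_kg * (math.exp(delta_v_ms / (isp_s * g0)) - 1)

# Hypothetical micro lander: 80 kg dry, ~2700 m/s from trans-lunar
# injection through descent, bipropellant Isp of 320 s.
m_p = propellant_mass(80.0, 2700.0, 320.0)  # roughly 109 kg of propellant
```

The exponential makes the point behind the mass-limit discussion: trimming either the ΔV requirement (a different trajectory strategy) or the dry mass (less payload) reduces propellant disproportionately.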
130

Design space exploration for co-mapping of periodic and streaming applications in a shared platform / Validering av designlösningar för utforskning av rymden för samkartläggning av periodiska och strömmande applikationer i en delad plattform

Yuhan, Zhang January 2023 (has links)
As embedded systems advance, the complexity and multifaceted requirements of products have increased significantly. A trend in this domain is the selection of different types of application models together with multiprocessors as the platform. However, existing design space exploration techniques often support only one particular model, and combining diverse application models may cause compatibility issues. Additionally, embedded system design inherently involves multiple objectives: beyond the essential functionality, other metrics such as power consumption, resource utilization, cost and safety always need to be considered. The consideration of these diverse metrics results in a vast design space, so effective design space exploration plays a crucial role. This thesis addresses these challenges by proposing a co-mapping approach for two distinct models: the periodically activated tasks model for real-time applications and the synchronous dataflow model for digital signal processing. Our primary goal is to co-map these two kinds of models onto a multi-core platform and explore trade-offs between the solutions. We choose the number of used resources and the throughput of the synchronous dataflow model as our performance metrics. We adopt a combination method in which periodic tasks are given precedence to ensure their deadlines are met; the remaining processor resources are then allocated to the synchronous dataflow model. Both the execution of the periodic tasks and the synchronous dataflow model are managed by a scheduler, which prevents resource contention and optimizes the utilization of available processor resources. To achieve a balance between the different metrics, we apply Pareto optimization as a guiding principle. This thesis uses the IDeSyDe tool, an extension of the ForSyDe group's current design space exploration tool, following the Design Space Identification methodology.
The implementation is based on Scala and Python, running on the Java virtual machine. The experimental results confirm the successful mapping and scheduling of the periodically activated tasks model and the synchronous dataflow model onto the shared multiprocessor platform. IDeSyDe finds the Pareto-optimal solutions, strategically maximizing the throughput of the synchronous dataflow model while minimizing resource consumption. This thesis provides valuable insight into the application of different models on a shared platform, particularly for developers interested in using IDeSyDe. However, due to time constraints, our test case may not fully demonstrate the scalability of our method; additional tests could further demonstrate the effectiveness of our approach. For further reference, the code can be checked in the GitHub repository at*. / Allt eftersom inbyggda system utvecklas, blir komplexiteten och de mångfacetterade kraven av produkter har ökat avsevärt. En trend inom detta område är urval av olika typer av applikationsmodeller och multiprocessorer som plattformen. Dock begränsad design utrymme utforskning tekniker ofta utföra en viss modell, och kombinera olika applikationsmodeller kan orsaka kompatibilitetsproblem. Dessutom inbyggt systemdesign i sig involverar flera mål. Utöver de väsentliga funktionerna, andra mätvärden måste alltid beaktas, såsom strömförbrukning, resurs användning, kostnad, säkerhet, etc. Övervägandet av dessa olika mätvärden resulterar i ett stort designutrymme spelar så effektiv designrumsutforskning också en avgörande roll roll. Denna avhandling tar upp dessa utmaningar genom att föreslå en samkartläggning tillvägagångssätt för två distinkta modeller: modellen med periodiskt aktiverade uppgifter för realtidsapplikationer och den synkrona dataflödesmodellen för digital signal bearbetning.
Vårt primära mål är att samkarta dessa två typer av modeller på en multi-core plattform och utforska avvägningar mellan lösningarna. Vi väljer antalet använda resurser och genomströmning av det synkrona dataflödet modell som vårt prestationsmått för bedömning. Vi använder en kombinationsmetod där periodiska uppgifter ges företräde för att säkerställa att deras tidsfrister hålls. Den återstående processorn resurser allokeras sedan till den synkrona dataflödesmodellen. Både utförandet av periodiska uppgifter och den synkrona dataflödesmodellen är hanteras av en schemaläggare, vilket förhindrar resursstrid och optimerar utnyttjandet av tillgängliga processorresurser. För att uppnå en balans mellan olika mått, implementerar vi Pareto-optimering som en vägledande princip i vårt tillvägagångssätt. Denna avhandling använder verktyget IDeSyDe, en förlängning av ForSyDe gruppens nuvarande verktyg för utforskning av designutrymme, efter Design Space Identifieringsmetodik. Implementeringen är baserad på Scala och Python, körs på den virtuella Java-maskinen. Experimentresultaten bekräftar den framgångsrika kartläggningen och schemaläggningen av den periodiskt aktiverade uppgiftsmodellen och det synkrona dataflödet modell på den delade flerprocessorplattformen. Vi finner Pareto-optimal lösningar av IDeSyDe, strategiskt inriktade på att maximera genomströmningen av synkront dataflöde samtidigt som resursförbrukningen minimeras. Denna uppsats fungerar som en värdefull inblick i tillämpningen av olika modeller på en delad plattform, särskilt för utvecklare IDeSyDe. På grund av tidsbrist kanske vårt testfall inte är fullt ut omfattar den potentiella skalbarheten hos vår avhandlingsmetod. Ytterligare tester kan visa hur effektiv vår strategi är. För ytterligare referens, koden kan kontrolleras i GitHub*.
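The Pareto-optimization principle guiding the trade-off between SDF throughput and used resources reduces to a non-dominance test over candidate mappings. The sketch below is illustrative; the candidate (throughput, cores) pairs are hypothetical, not IDeSyDe output.

```python
def pareto_front(solutions):
    """Keep the non-dominated (throughput, resources) points:
    maximize throughput while minimizing the processors used."""
    front = []
    for t, r in solutions:
        dominated = any(t2 >= t and r2 <= r and (t2, r2) != (t, r)
                        for t2, r2 in solutions)
        if not dominated:
            front.append((t, r))
    return front

# Hypothetical mappings: (SDF throughput, number of cores used).
candidates = [(10, 4), (8, 2), (10, 3), (6, 2), (12, 6), (7, 5)]
best = pareto_front(candidates)  # [(8, 2), (10, 3), (12, 6)]
```

Each surviving point is a distinct trade-off a designer might pick; no point on the front is strictly better than another in both metrics.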
