71. Parallel Simulation of SystemC Loosely-Timed Transaction Level Models. Sotiropoulos Pesiridis, Konstantinos (2017)
Parallelizing the development cycles of hardware and software is becoming the industry norm for reducing the time to market of electronic devices. In the absence of hardware, software development is based on a virtual platform: a fully functional software model of a system under development, able to execute unmodified code. A Transaction Level Model, expressed with the SystemC TLM 2.0 language, is one of many possible ways of constructing a virtual platform. Under SystemC's simulation engine, hardware and software are co-simulated. However, the sequential nature of the reference implementation of SystemC's simulation kernel is a limiting factor: poor simulation performance often constrains the scope and depth of the design decisions that can be evaluated. The main objective of this thesis is to demonstrate the feasibility of parallelizing the co-simulation of hardware and software using Transaction Level Models, outside SystemC's reference simulation environment. The major obstacle identified is the preservation of causal relations between simulation events. The solution is obtained by using the process synchronization mechanism known as the Chandy/Misra/Bryant algorithm. To demonstrate our approach and evaluate under which conditions a speedup can be achieved, we use the model of a cache-coherent, symmetric multiprocessor executing a synthetic application. Two versions of the model are used for the comparison: a parallel version based on the Message Passing Interface (MPI) 3.0, which incorporates the synchronization algorithm, and an equivalent sequential model based on SystemC TLM 2.0. Our results indicate that by adjusting the parameters of the synthetic application a certain threshold is reached, above which a significant speedup over the sequential SystemC simulation is observed. Although performed manually, the transformation of a SystemC TLM 2.0 model into a parallel MPI application is deemed feasible.
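A minimal sketch of the synchronization mechanism named above, assuming a two-rank topology, a fixed lookahead and a simple message layout (none of which are details from the thesis): each MPI rank advances its local clock only as far as its input channel guarantees, and announces its own lower bound to the peer with null messages, which is what keeps both ranks making progress while preserving causal order.

```cpp
#include <mpi.h>
#include <algorithm>
#include <cstdio>

// Timestamped message; a null message carries no payload, only a time bound.
struct Msg { double timestamp; int is_null; int payload; };

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    const int    peer      = 1 - rank;   // run with exactly two ranks
    const double LOOKAHEAD = 10.0;       // minimum model delay on the link (assumed)
    const double END_TIME  = 100.0;
    double local_clock   = 0.0;          // this rank's simulation time
    double channel_clock = 0.0;          // guaranteed bound on the peer's output

    while (local_clock < END_TIME) {
        // Announce a lower bound on any future output (a null message) and,
        // in the same call, receive the peer's bound; Sendrecv cannot deadlock.
        Msg out{local_clock + LOOKAHEAD, 1, 0}, in;
        MPI_Sendrecv(&out, sizeof out, MPI_BYTE, peer, 0,
                     &in,  sizeof in,  MPI_BYTE, peer, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        channel_clock = in.timestamp;

        // Every local event up to channel_clock can now be processed safely.
        double next = std::min(channel_clock, END_TIME);
        if (next > local_clock) {
            local_clock = next;          // stand-in for real event processing
            std::printf("rank %d advanced to t=%.1f\n", rank, local_clock);
        }
    }
    MPI_Finalize();
    return 0;
}
```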

72. Techniques et outils pour la vérification de Systèmes-sur-Puce au niveau transaction / Techniques and tools for the verification of Systems-on-Chip at the transaction level. Moy, Matthieu (2005)
The work presented in this document addresses the verification of models of systems-on-chip at the transaction level (TLM). We present the transaction level and its variants, recall why this new abstraction level is now necessary in addition to the register transfer level (RTL) to meet ever stronger productivity and quality constraints, and show how it fits into the design flow.

We present a new tool, LusSy, for the formal verification of transactional models written in SystemC. Its internal structure resembles that of a compiler: a front end, Pinapa, which reads the source program; a semantics extraction, Bise, into our intermediate formalism HPIOM; a series of optimizations in the Birth component; and code generators targeting the proof tools for Lustre and SMV.

LusSy is designed and written to place as few restrictions as possible on the form of the SystemC code accepted as input. Pinapa uses a novel approach that frees it from most of the limitations affecting similar tools. The semantics extraction implements several TLM constructs that no other tool available today handles. No manual annotation of the source code is required; the whole chain is fully automated.

LusSy is able to formally prove properties on small models, and its components are reusable in compositional proof tools, or in code analyses other than model checking, which will scale better than the current approach.

We present the principles of each stage of the transformation, as well as our implementation. Results are given for simple, small examples and for a medium-sized case study, EASY. The experiments with LusSy allowed us to compare the different proof tools we used and to evaluate the effectiveness of the optimizations we implemented.
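To make the front end's task concrete, below is a hedged sketch of the kind of SystemC source a tool like Pinapa has to parse: module structure, a thread process, timed waits and event notification. The module itself is illustrative, not an example from the thesis.

```cpp
#include <systemc.h>

SC_MODULE(Producer) {
    sc_out<int> data;   // observable state a model checker would track
    sc_event    ready;  // synchronization that an HPIOM-like formalism encodes

    void run() {
        for (int i = 0; ; ++i) {
            data.write(i);
            ready.notify();
            wait(10, SC_NS);   // timed wait: part of the extracted semantics
        }
    }
    SC_CTOR(Producer) { SC_THREAD(run); }
};

int sc_main(int, char*[]) {
    Producer p("producer");
    sc_signal<int> sig;
    p.data(sig);
    sc_start(100, SC_NS);
    return 0;
}
```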

73. Application de la méthode TLM à la modélisation de la propagation acoustique en milieu urbain / Application of the TLM method to the modelling of acoustic propagation in urban environments. Guillaume, Gwenaël (2009)
Noise is a major societal problem, particularly in urban and suburban areas where the noise sources associated with road traffic are numerous and varied. Current acoustic prediction software, based on energy and geometrical models and originally developed for sparsely built-up outdoor environments, is therefore of limited use for acoustic prediction in urban and suburban areas (presence of buildings and obstacles, real moving noise sources whose operating conditions vary over time, and so on). This thesis proposes a time-domain numerical model suited to modelling acoustic propagation in urban environments. Among the candidate methods, the TLM ("Transmission Line Modelling") method constitutes an original approach, since it can handle propagation domains with complex geometries while integrating most of the physical phenomena involved in long-range sound propagation (diffraction, reflection, stationary phenomena, geometrical divergence, atmospheric attenuation, micrometeorological effects). However, the literature review highlighted two major limitations of the method with respect to our problem: the implementation of realistic boundary conditions and the modelling of an infinite propagation medium. A generic TLM model was therefore developed, which can run two- or three-dimensional simulations combining all the phenomena that affect sound propagation in densely built-up outdoor environments. An approach for implementing an impedance boundary condition was also proposed. The method consists in approximating the impedance by a sum of first-order linear systems, and a recursive convolution method further limits the computational cost of evaluating the sound pressure field on the boundary. Simulations of acoustic propagation above several types of absorbing ground were carried out and successfully compared with analytical solutions. Concerning the modelling of an infinite propagation medium, a formulation of anisotropic absorbing layers bounding the computational domain was also developed. Finally, realistic applications to "urban" problems (noise barriers, green façades and roofs) were presented.
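A hedged sketch of the core of the method: the classic 2D TLM update, in which each node scatters its four incident pulses (each scattered pulse is half the sum of the incidents minus the corresponding incident) and the scattered pulses then travel to the neighbouring nodes. Grid size, the impulsive source and the rigid reflecting edges are illustrative choices; the thesis's impedance boundaries and absorbing layers are not shown.

```cpp
#include <cstdio>
#include <vector>

int main() {
    const int NX = 64, NY = 64, STEPS = 100;
    // Four incident / scattered pulses per node; port 0=N, 1=S, 2=E, 3=W.
    std::vector<double> I(NX * NY * 4, 0.0), S(NX * NY * 4, 0.0);
    auto idx = [&](int x, int y, int p) { return (y * NX + x) * 4 + p; };

    I[idx(NX / 2, NY / 2, 0)] = 1.0;   // impulsive source at the centre

    for (int t = 0; t < STEPS; ++t) {
        // Scattering: S_p = (1/2) * sum(I) - I_p  (standard 2D TLM node).
        for (int y = 0; y < NY; ++y)
            for (int x = 0; x < NX; ++x) {
                double sum = I[idx(x, y, 0)] + I[idx(x, y, 1)]
                           + I[idx(x, y, 2)] + I[idx(x, y, 3)];
                for (int p = 0; p < 4; ++p)
                    S[idx(x, y, p)] = 0.5 * sum - I[idx(x, y, p)];
            }
        // Propagation: the pulse leaving a node's north port becomes the
        // south-incident pulse of the node above; domain edges reflect (+1).
        for (int y = 0; y < NY; ++y)
            for (int x = 0; x < NX; ++x) {
                I[idx(x, y, 0)] = (y + 1 < NY) ? S[idx(x, y + 1, 1)] : S[idx(x, y, 0)];
                I[idx(x, y, 1)] = (y - 1 >= 0) ? S[idx(x, y - 1, 0)] : S[idx(x, y, 1)];
                I[idx(x, y, 2)] = (x + 1 < NX) ? S[idx(x + 1, y, 3)] : S[idx(x, y, 2)];
                I[idx(x, y, 3)] = (x - 1 >= 0) ? S[idx(x - 1, y, 2)] : S[idx(x, y, 3)];
            }
    }
    // Nodal pressure is half the sum of the four incident pulses.
    double p0 = 0.5 * (I[idx(NX / 2, NY / 2, 0)] + I[idx(NX / 2, NY / 2, 1)]
                     + I[idx(NX / 2, NY / 2, 2)] + I[idx(NX / 2, NY / 2, 3)]);
    std::printf("pressure at source node after %d steps: %f\n", STEPS, p0);
    return 0;
}
```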

74. Productivity, Cost and Environmental Damage of Four Logging Methods in Forestry of Northern Iran. Badraghi, Naghimeh (2014)
Increasing productivity, reducing cost, reducing soil damage, and reducing the impact of harvesting on standing trees and regeneration are all very important objectives for the ground skidding system in the management of the Hyrcanian forest. The research carried out to meet these objectives compared four logging methods in northern Iran: the tree length method (TLM), the long length method (LLM), the short length method (SLM), and wood extraction by mule (mule). To determine the cost per unit, time study techniques were used for each harvesting method, and the time-study data were transformed to base-10 logarithms. On the basis of the developed models, 11 skidding turns were simulated and the unit costs estimated as functions of log diameter (DL), skidding distance (SD) and winching distance (WD) for 11 different cycles with TLM, LLM and SLM.
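As a hedged illustration of this modelling step (the coefficients b_i, the machine rate and the volume term are generic placeholders, not values from the study), time models of this kind take a log-linear form, with the unit cost obtained from the predicted cycle time T, an hourly machine cost rate, and the extracted volume V:

$$\log_{10} T = b_0 + b_1\,\mathrm{DL} + b_2\,\mathrm{SD} + b_3\,\mathrm{WD}, \qquad C = \frac{T \cdot c_{\mathrm{machine}}}{V}$$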
The results showed that, on average, the net costs of extracting one cubic meter of wood were 3.06, 5.69, 6.81 and 34.36 €/m3 for TLM, LLM, SLM and mule respectively. The costs as functions of DL, SD and WD showed that the most economical alternative for northern Iran is TLM. In the cut-to-length system, the costs of both alternatives (LLM, SLM) depended significantly on DL; thus this study suggests that as long as the diameter of the felled trees is less than 40 cm, the cut-to-length system is not an economical alternative, whilst it can be applied for trees with a diameter of more than 40 cm. Where diameters exceed 40 cm, TLM is more economical than SLM, although the difference was not significant. Depending on SD, over short skidding distances SLM is preferable to LLM, but over long skidding distances LLM is more economical than SLM. Winching distance had no effect on cost.
To assess the damage to seedlings and standing trees, a 100% inventory method was employed before and after hauling alongside the skidding trails, winching strips and mule hauling paths, over a width of 12 m. To choose the best alternative with respect to stand damage, analysis by multiple criteria approval (MA) was applied. The share of trees damaged by the winching operation was 11.89% in TLM, 14.44% in LLM, 27.59% in SLM and zero stems with the mule, while the skidding operation damaged 16.73%, 3.13% and 8.78% of total trees in TLM, LLM and SLM. In the winching area, about 14%, 20%, 21% and 6% of the total regeneration was damaged by TLM, LLM, SLM and mule, and the skidding operation damaged 7.5% of regeneration in TLM, 7.4% in LLM and 9.4% in SLM. The alternative friendliest to the residual stand was the mule, but among the skidder-based methods MA showed that the best alternative with respect to residual damage is LLM.
To determine the degree of soil compaction, a core sampling technique for bulk density was used. Soil samples were collected from the horizontal face of a soil pit with a 10 cm deep soil core, at 50 m intervals on skid trails and winching strips and in control areas (no vehicle passes); in the hauling direction of the mule, a soil sample was taken at 10 m intervals. To determine the post-harvest extent of disturbance caused on skid trails by skidding operations, the disturbed widths were measured at 50 m intervals along the skid trails. In the winching area, where the winched logs created a streak of displaced soil, the width of the displaced streak was measured at 5 m intervals along the winching strip. In mule hauling operations, the width of the streak created by the mule foot track was measured at 10 m intervals.
To compare the increase in average bulk density between alternatives, one-way ANOVA, the Duncan test and the Dunnett t-test were used at a 95% confidence level. A general linear model was applied to relate the increase in bulk density to the slope gradient. To assess the correlation between the increase in soil bulk density and the slope gradient, and between soil compaction and soil moisture content (%), the Pearson correlation test was applied. To choose the best alternative among the skidder-based methods, an MA test was applied again. The bulk density on the skidding trail increased by 51% after 30 skidding turns, 35% after 31 skidding turns (one unloaded and one loaded pass) and 46% after 41 skidding turns. The ANOVA results (p < 0.05) show significant differences in bulk density between alternatives. The Duncan test and the Dunnett t-test indicated that the increase in soil bulk density was not significant between the control samples and either the TLM winching strips or the mule extraction samples.
The general linear model and Pearson correlation results indicated that the slope gradient had no significant effect on soil compaction, whilst the Pearson test indicated a medium negative correlation between soil compaction and soil moisture percentage. The ground-based winching operation disturbed and compacted 0.07%, 0.03%, 0.05% and 0.002% of the total area, and the ground-based skidding operation 1.21%, 1.67%, 0.81% and 0.00%, in TLM, LLM, SLM and mule respectively. The Pearson correlation results show that the width of the disturbed area was significantly influenced by the diameter and length of the logs (p < 0.05), but there was no significant correlation between soil disturbance width and slope. The MA analysis showed that soil compaction was not related to logging method, but MA sensitivity analysis shows that LLM and TLM are both preferable to SLM.

75. Paměťový subsystém v SystemC / SystemC Memory Subsystem. Michl, Kamil (2020)
This thesis deals with the design and implementation of a memory subsystem for processor simulation. The memory subsystem is designed using the Transaction Level Modeling approach. The implementation is done in C++ utilizing the SystemC library. The processor simulation is adopted from the Codasip company simulator. The objective is to create a functional connection between the processor and the memory inside the simulator. This connection supports the communication protocols of the AHB3-lite, AXI4-lite, CPB, and CPB-lite buses. The new implementation of the aforementioned connection and the memory is integrated into the original simulator, and the resulting simulator is tested using unit tests.
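A minimal sketch of a TLM-2.0 style memory model with a blocking transport interface, in the spirit of the subsystem described above. The trivial initiator standing in for the processor side, the socket names, memory size and latency are illustrative assumptions; the bus-protocol adapters (AHB3-lite, AXI4-lite, CPB, CPB-lite) are not shown.

```cpp
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>
#include <cstring>
#include <vector>

using namespace sc_core;

// Memory target: stores bytes and serves blocking transport requests.
struct Memory : sc_module {
    tlm_utils::simple_target_socket<Memory> socket;
    std::vector<unsigned char> mem;

    Memory(sc_module_name name, std::size_t size)
        : sc_module(name), socket("socket"), mem(size, 0) {
        socket.register_b_transport(this, &Memory::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_time& delay) {
        sc_dt::uint64 addr = trans.get_address();
        unsigned char* ptr = trans.get_data_ptr();
        unsigned int len   = trans.get_data_length();

        if (addr + len > mem.size()) {               // out-of-range access
            trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
            return;
        }
        if (trans.is_read())
            std::memcpy(ptr, &mem[addr], len);
        else if (trans.is_write())
            std::memcpy(&mem[addr], ptr, len);

        delay += sc_time(10, SC_NS);                 // modelled access latency
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

// Trivial initiator standing in for the processor side.
struct Cpu : sc_module {
    tlm_utils::simple_initiator_socket<Cpu> socket;

    SC_CTOR(Cpu) : socket("socket") { SC_THREAD(run); }

    void run() {
        tlm::tlm_generic_payload trans;
        sc_time delay = SC_ZERO_TIME;
        unsigned int word = 0xdeadbeef;

        trans.set_command(tlm::TLM_WRITE_COMMAND);   // write one word...
        trans.set_address(0x100);
        trans.set_data_ptr(reinterpret_cast<unsigned char*>(&word));
        trans.set_data_length(sizeof word);
        socket->b_transport(trans, delay);

        unsigned int readback = 0;                   // ...then read it back
        trans.set_command(tlm::TLM_READ_COMMAND);
        trans.set_data_ptr(reinterpret_cast<unsigned char*>(&readback));
        socket->b_transport(trans, delay);
        sc_assert(readback == word);
    }
};

int sc_main(int, char*[]) {
    Cpu cpu("cpu");
    Memory mem("mem", 0x1000);
    cpu.socket.bind(mem.socket);
    sc_start();
    return 0;
}
```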

77. Enhancing numerical modelling efficiency for electromagnetic simulation of physical layer components. Sasse, Hugh Granville (2010)
The purpose of this thesis is to present solutions to several key difficulties that limit the application of numerical modelling in communication cable design and analysis. Specific limiting factors are that simulations are time consuming, and that the process of comparing results requires skill and is poorly defined and understood. When much of the design process consists of optimising performance within a well defined domain, artificial intelligence techniques may reduce or remove the need for human interaction in the design process. Automating human processes allows round-the-clock operation at a higher throughput, and achieving a speedup would permit greater exploration of possible designs, improving understanding of the domain. This thesis presents work relating to three facets of the efficiency of numerical modelling: minimising simulation execution time, controlling optimisation processes, and quantifying comparisons of results. These topics matter because simulation times for most problems of interest run into tens of hours. The design process for most systems being modelled may be considered an optimisation process, in so far as the design is improved based upon comparing test results with a specification. Software that automates this process permits improvements to continue outside working hours, and produces decisions unaffected by the psychological state of a human operator. Improved performance of simulation tools would facilitate exploration of more variations on a design, which would improve understanding of the problem domain, promoting a virtuous circle of design. The minimisation of execution time was achieved through the development of a parallel TLM solver which did not use specialised hardware or a dedicated network. Its design was novel because it was intended to operate on a network of heterogeneous machines in a fault-tolerant manner, and included a means of reducing the vulnerability of simulated data without encryption. Optimisation processes were controlled by genetic algorithms and particle swarm optimisation, which were novel applications in communication cable design. The work extended the range of cable parameters, reducing conductor diameters for twisted pair cables and reducing the optical coverage of screens for a given shielding effectiveness. Work on the comparison of results introduced "colour maps" as a way of displaying three scalar variables over a two-dimensional surface, and comparisons were quantified by extending 1D Feature Selective Validation (FSV) to two dimensions using an ellipse-shaped filter, in such a way that it could be extended to higher dimensions. In so doing, some problems with FSV were detected, and suggestions for overcoming them are presented, such as the special case of zero-valued DC signals. A re-description of Feature Selective Validation using Jacobians and tensors is proposed, in order to facilitate its implementation in higher-dimensional spaces.
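A hedged sketch of the "colour map" idea: three scalar fields over a two-dimensional surface are displayed at once by mapping each field to one RGB channel. The stand-in data, the normalisation and the PPM output format are assumptions, not the thesis's implementation.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    const int W = 256, H = 256;
    // Three scalar result fields over the same 2D surface (stand-in data).
    std::vector<double> a(W * H), b(W * H), c(W * H);
    for (int i = 0; i < W * H; ++i) {
        a[i] = (i % W) / double(W);       // e.g. measured response
        b[i] = (i / W) / double(H);       // e.g. simulated response
        c[i] = 0.5;                       // e.g. a local comparison metric
    }

    // Map each field to one RGB channel of a PPM image.
    std::FILE* f = std::fopen("colour_map.ppm", "wb");
    if (!f) return 1;
    std::fprintf(f, "P6\n%d %d\n255\n", W, H);
    for (int i = 0; i < W * H; ++i) {
        unsigned char px[3] = {
            static_cast<unsigned char>(std::clamp(a[i], 0.0, 1.0) * 255.0),
            static_cast<unsigned char>(std::clamp(b[i], 0.0, 1.0) * 255.0),
            static_cast<unsigned char>(std::clamp(c[i], 0.0, 1.0) * 255.0)};
        std::fwrite(px, 1, 3, f);
    }
    std::fclose(f);
    return 0;
}
```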

78. Digital Twin for Firmware and Artificial Intelligence prototyping. Maragno, Gianluca (2023)
The fourth industrial revolution has given rise to new megatrends aimed at improving time to market and saving resources in the development and manufacturing of new products. Among these trends, the Digital Twin (DT) is of major interest to developers and strategy analysts. The faithful transposition of a real entity into a digital environment enables the exploration and testing of the different components of the modelled object, taking a further step towards a correct-by-design approach. STMicroelectronics (ST) is exploring the benefits this technology offers developers. The company's primary focus is the creation of SystemC models of its manufactured components, so that co-simulation between a Hardware (HW)/Software (SW) platform and a kinematic simulator becomes possible. This innovative approach facilitates the comprehensive validation of the designed Firmware (FW), which relies on the intricate interplay with sensory aspects influenced by both device behavior and environmental circumstances. Furthermore, many applications nowadays implement an Artificial Intelligence (AI) algorithm whose performance depends strictly on the quality of the sensed signals and on the dataset from which the model is built. A proper DT allows AI development to begin during the design phase, yielding not only a valid AI model for the real product but also improving the quality and performance of the model built. This conclusion is demonstrated through the construction of a simple robotic arm implementing an anomaly-detection algorithm based on a Machine Learning (ML) model.
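A hedged sketch of the anomaly-detection idea on a sensor stream: a running mean and variance (far simpler than the ML model used in the thesis) stand in for the model of normal behaviour, and a sample is flagged when its z-score against the statistics of the preceding samples exceeds a threshold. The data and the 3-sigma threshold are illustrative assumptions.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double THRESHOLD = 3.0;   // flag beyond 3 sigma (assumed)
    double mean = 0.0, m2 = 0.0;    // Welford running statistics
    long   n = 0;

    // Stand-in sensor stream: one obvious outlier among normal readings.
    const double samples[] = {0.10, 0.12, 0.09, 0.11, 0.10, 2.50, 0.11};

    for (double x : samples) {
        // Test against the statistics of the *preceding* samples, so the
        // outlier cannot mask itself by inflating the variance.
        if (n >= 2) {
            double sd = std::sqrt(m2 / (n - 1));
            if (sd > 0.0 && std::fabs(x - mean) / sd > THRESHOLD)
                std::printf("anomaly detected: %.2f\n", x);
        }
        // Update the model of normal behaviour (Welford's update).
        ++n;
        double d = x - mean;
        mean += d / n;
        m2   += d * (x - mean);
    }
    return 0;
}
```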

79. Une approche de modélisation au niveau système pour la conception et la vérification de systèmes sur puce à faible consommation / An electronic system level modeling approach for the design and verification of low-power systems-on-chip. Mbarek, Ons (2013)
A SoC power management solution can be defined by a low-power architecture composed of multiple power domains and a power management strategy controlling the power domain states. If these two elements are energy-efficient, an energy-efficient solution can be obtained. This approach requires inferring power structural elements and their related behavior in the chip's internal logic. A strategy adjusting the power domain states must respect the structural and functional dependencies induced by the physical composition of the power domains. This strong relationship between the power architecture and its management strategy must be explored at early design stages to find the most energy-efficient solution. Low-power design standards have recently enabled low-power architecture exploration starting at the Register Transfer Level (RTL) by defining semantics to specify the power architecture and to simulate and check its behavior alongside the initial functional one. However, these standards lack semantics for a reusable power domain control interface, which makes the exploration of power management strategies tedious. The RTL-based semantics they define also constrain their use at the Transaction Level of Modeling (TLM) for fast and easy exploration. This dissertation proposes extensions to low-power standards to fill these gaps. It provides a complete study of power optimization opportunities based on the composition and management of power domains in Transaction-Level (TL) functional models within a common USLPAF framework. USLPAF includes a methodology that combines the design and verification of TL low-power models. To apply this methodology, USLPAF incorporates a library of modeling techniques and built-in features.
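A hedged sketch of the structural-dependency idea discussed above, under the assumption of a simple two-state, parent/child power-domain model (the USLPAF semantics are richer): a manager may not switch a domain on while its parent is off, and switching a domain off first propagates to its children.

```cpp
#include <cstdio>
#include <string>
#include <vector>

struct PowerDomain {
    std::string  name;
    PowerDomain* parent;                 // structural dependency (assumed tree)
    bool on = false;

    bool power_on() {
        if (parent && !parent->on) {     // cannot power on under an off parent
            std::printf("refused: %s requires %s to be on\n",
                        name.c_str(), parent->name.c_str());
            return false;
        }
        on = true;
        return true;
    }
    void power_off(std::vector<PowerDomain*>& all) {
        for (auto* d : all)              // children must go down first
            if (d->parent == this && d->on) d->power_off(all);
        on = false;
    }
};

int main() {
    PowerDomain top{"top", nullptr}, cpu{"cpu", &top}, cache{"cache", &cpu};
    std::vector<PowerDomain*> all{&top, &cpu, &cache};

    cache.power_on();                    // refused: its parent chain is off
    top.power_on(); cpu.power_on(); cache.power_on();
    top.power_off(all);                  // switches cache and cpu off first
    return 0;
}
```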

80. Simulation de haut niveau de systèmes d'exploitations distribués pour l'exploration matérielle et logicielle d'architectures multi-noeuds hétérogènes / High-level simulation of distributed operating systems for hardware and software exploration of heterogeneous multi-node architectures. Huck, Emmanuel (2011)
Designing an embedded system means finding an algorithm/architecture trade-off under real-time constraints. The thesis defended is that, to design an MPSoC, and in particular one using reconfigurable circuits that modify the execution platform while it is running, the necessary validation of the fluctuating behaviors of a reactive system requires a prior evaluation, which can be carried out by (high-level) simulation while allowing exploration of the architectural design space, both hardware and software, as early as possible in the design flow. The platform manager's point of view is adopted to explore, at high level, the system's reactions to partitioning choices, which are affected by the algorithms of the operating-system services and their possible implementations. To this end, a modular model of OS services functionally simulates in SystemC, and jointly, the hardware, the software tasks and the operating system, distributed over several communicating heterogeneous execution nodes. This model made it possible to evaluate the ideal real-time architecture of a dynamic robotic-vision application, jointly with the exploration of the management services of a modeled reconfigurable zone. This OS model was also integrated into a simulator of a heterogeneous MPSoC with an estimated performance of one tera operations per second.
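A hedged sketch of the modelling style described: operating-system services simulated functionally in SystemC alongside the software tasks they schedule. The round-robin policy, the two tasks and the time slice are illustrative assumptions, not the thesis's service model.

```cpp
#include <systemc.h>

SC_MODULE(OsModel) {
    sc_event task_go[2];                 // one wake-up event per task

    void scheduler() {                   // OS service: round-robin dispatch
        for (int i = 0; ; i = (i + 1) % 2) {
            task_go[i].notify();         // elect the next ready task
            wait(1, SC_MS);              // modelled time slice
        }
    }
    void task(int id) {                  // functional model of a software task
        while (true) {
            wait(task_go[id]);           // blocked until the OS elects it
            // the functional body of software task `id` would run here
        }
    }
    void task0() { task(0); }
    void task1() { task(1); }

    SC_CTOR(OsModel) {
        SC_THREAD(scheduler);
        SC_THREAD(task0);
        SC_THREAD(task1);
    }
};

int sc_main(int, char*[]) {
    OsModel os("os_model");
    sc_start(10, SC_MS);
    return 0;
}
```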