Real-time complex light field generation through a multi-core fiber with deep learning

Sun, Jiawei, Wu, Jiachen, Koukourakis, Nektarios, Cao, Liangcai, Kuschmierz, Robert, Czarske, Juergen 08 April 2024
The generation of tailored complex light fields with multi-core fiber (MCF) lensless microendoscopes is widely used in biomedicine. However, the computer-generated holograms (CGHs) used for such applications are typically produced by iterative algorithms that demand heavy computation, limiting advanced applications like fiber-optic cell manipulation. The random and discrete distribution of the fiber cores in an MCF induces strong spatial aliasing in the CGHs; hence, an approach that can rapidly generate tailored CGHs for MCFs is in high demand. We demonstrate a novel deep neural network, CoreNet, that generates accurate tailored CGHs for MCFs at near video rate. CoreNet is trained by unsupervised learning and reduces the computation time by two orders of magnitude with high-fidelity light field generation compared to previously reported CGH algorithms for MCFs. The tailored CGHs generated in real time are loaded on the fly onto a phase-only spatial light modulator (SLM) for near video-rate complex light field generation through the MCF microendoscope. This paves the way for real-time cell rotation and other applications that require real-time, high-fidelity light delivery in biomedicine.
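CoreNet's code is not reproduced here; as a rough, hypothetical sketch of the unsupervised-training idea the abstract describes — a network predicts a phase-only hologram, a differentiable propagation model simulates the resulting light field, and the loss compares simulated and target intensities, so no ground-truth holograms are needed — consider the following toy PyTorch loop. The architecture, the propagation model and every parameter are illustrative assumptions, not CoreNet itself.

```python
# Toy sketch of unsupervised CGH training (assumptions throughout; the real
# CoreNet architecture and MCF propagation model differ).
import torch
import torch.nn as nn

class TinyCGHNet(nn.Module):
    """Toy stand-in for CoreNet: predicts a phase map from a target image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, target):
        return torch.pi * torch.tanh(self.net(target))  # phase in (-pi, pi)

def propagate(phase):
    """Crude far-field propagation: phase-only field -> Fourier-plane intensity."""
    field = torch.exp(1j * phase)
    far = torch.fft.fftshift(torch.fft.fft2(field), dim=(-2, -1))
    return far.abs() ** 2

model = TinyCGHNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
target = torch.rand(1, 1, 64, 64)          # placeholder target intensity
for step in range(100):                    # unsupervised training loop
    sim = propagate(model(target))
    sim = sim / sim.amax()                 # normalise before comparing
    loss = nn.functional.mse_loss(sim, target)
    opt.zero_grad(); loss.backward(); opt.step()
```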

Hardware-in-the-Loop Simulation of Aircraft Actuator

Braun, Robert January 2009
Advanced computer simulations will play an increasingly important role in future aircraft development and aeronautic research. Hardware-in-the-loop simulations enable the examination of single components without the need for a full-scale model of the system. This project investigates the possibility of conducting hardware-in-the-loop simulations on a hydraulic test rig using modern computer equipment. Controllers and models have been built in Simulink and Hopsan. Most hydraulic and mechanical components used in Hopsan have also been translated from Fortran to C and compiled into shared libraries (.dll). This provides an easy way of importing Hopsan models into LabVIEW, which is used to control the test rig. The results from Hopsan and LabVIEW have been compared, and no major differences could be found. Importing Hopsan components into LabVIEW can potentially enable powerful features not available in Hopsan, such as hardware-in-the-loop simulation, multi-core processing and advanced plotting tools. It does, however, require fast computer systems to achieve real-time speed. The results of this project can provide interesting starting points in the development of the next generation of Hopsan.
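As an aside, the workflow the abstract describes — compiling simulation components to a shared library so a host environment can drive them step by step — can be sketched as follows. The library name and exported functions (component_init, component_step) are hypothetical; LabVIEW would use its Call Library Function node rather than Python, so this only illustrates the interface style.

```python
# Hypothetical sketch: load a component compiled to a shared library and step
# it once per simulation tick. The exported symbols are invented for
# illustration; the real Hopsan/LabVIEW interface differs.
import ctypes

lib = ctypes.CDLL("./orifice_component.dll")      # assumed library name
lib.component_init.restype = None
lib.component_step.restype = ctypes.c_double
lib.component_step.argtypes = [ctypes.c_double, ctypes.c_double]

lib.component_init()
dt, pressure = 1e-3, 1.0e5
for tick in range(1000):                          # fixed-step simulation loop
    flow = lib.component_step(pressure, dt)       # one component update
```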

Pic-Vert: a particle-in-cell implementation for multi-core architectures

Barsamian, Yann 31 October 2018
In this thesis, we are interested in solving the Vlasov–Poisson system of equations (used in plasma physics, for example within the ITER project) with classical Particle-in-Cell (PIC) and semi-Lagrangian methods. The main contribution of this thesis is Pic-Vert, an efficient implementation of the PIC method for multi-core architectures, written in C. Our implementation (a) achieves a close-to-minimal number of memory transfers with main memory, (b) exploits SIMD instructions for numerical computations, and (c) exhibits a high degree of shared-memory parallelism. To put our work in perspective with respect to the state of the art, we propose a metric to compare the efficiency of different PIC implementations across different multi-core architectures. Our implementation is 3 times faster than other recent implementations on the same architecture (Intel Haswell).
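For readers unfamiliar with the method, one time step of a minimal one-dimensional electrostatic PIC loop — charge deposition, an FFT-based Poisson solve, field gather, particle push — can be sketched as below. This toy NumPy version only illustrates the algorithm's structure; Pic-Vert itself is a heavily optimized C implementation.

```python
# Minimal 1D electrostatic particle-in-cell sketch (illustrative only).
import numpy as np

ng, nump, L, dt = 64, 10_000, 2 * np.pi, 0.1
dx = L / ng
rng = np.random.default_rng(0)
x = rng.uniform(0, L, nump)                 # particle positions
v = rng.normal(0, 1, nump)                  # particle velocities

for step in range(100):
    # 1) charge deposition (nearest-grid-point for brevity; CIC is typical),
    #    normalised to the mean density, with a neutralising background
    cells = (x / dx).astype(int) % ng
    rho = np.bincount(cells, minlength=ng) * (L / nump / dx) - 1.0
    # 2) Poisson solve in Fourier space: k^2 phi_k = rho_k, E = -dphi/dx
    k = np.fft.fftfreq(ng, d=dx) * 2 * np.pi
    k[0] = 1.0                              # avoid division by zero
    phi_k = np.fft.fft(rho) / k**2
    phi_k[0] = 0.0                          # zero-mean potential
    E = np.real(np.fft.ifft(-1j * k * phi_k))
    # 3) gather the field at particle cells and push (electrons: q/m = -1)
    v -= E[cells] * dt
    x = (x + v * dt) % L
```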

Electro-thermal analysis of power cable harnesses: a contribution to the optimization of energy distribution systems in road vehicles with electric drives

Holyk, Christophe 04 December 2014
In the context of growing ecological concerns, the development of road transport vehicles is moving toward less polluting vehicles with electric drives, such as Hybrid Electric Vehicles (HEVs) and full Electric Vehicles (EVs). With rising power requirements and shrinking available space, thermal management is becoming an increasingly important concern during the development of on-board vehicle components such as electric motors/generators, power inverters, battery packs and cable harnesses. Among these, the cable harness, which is typically composed of electrical cables, connectors and power distribution boxes, can only be designed properly after a detailed thermal, electrical, chemical and mechanical analysis. This thesis contributes to the optimization of the electro-thermal design of cable harnesses through simulation, so as to reduce the amount of experimental testing needed during their development. Theoretical models for predicting the electrical and thermal behavior of electric cables and cable harnesses are reviewed and adapted to automotive requirements. Validation is accomplished by comparing simulation results with Finite Element Analysis (FEA) and measurement data. A major part of this thesis addresses the thermal simulation of electrical cables of infinite length installed in air, taking into account the temperature dependence of the conductor resistances and the non-linearity of the total heat transfer coefficient at the cable surface.
The influence of shielding currents and arbitrary current loads in the conductors on the temperature rise within electric cables is also considered using thermal ladder networks and illustrated with practical examples. Because shielding currents in vehicles are caused not only by induced currents but also by functional electrical currents generated by low-voltage power sources, new theoretical studies and experimental observations for estimating these currents as a function of the vehicle's electrical architecture and circuit characteristics are presented. A primary finding is that keeping the resistance of grounding connections low compared to that of the shielding connections is an effective but expensive means of limiting the transfer of functional currents into the shielding circuits. Finally, a complete and modular model for predicting transient temperatures along the length of cable harness sections is developed and validated based on all the previous findings.
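The coupled electro-thermal balance described above — Joule losses that grow with temperature against non-linear convective and radiative surface transfer — can be illustrated with a small steady-state sketch. The convection correlation and all parameter values are assumptions for illustration, and the model ignores the cable's internal thermal resistance.

```python
# Steady-state temperature of a cable in air, balancing temperature-dependent
# Joule losses against non-linear convection + radiation (all values assumed).
import math

I = 120.0            # current, A (assumed)
R20 = 0.5e-3         # conductor resistance at 20 C, ohm/m (assumed)
alpha = 0.00393      # copper temperature coefficient, 1/K
D = 0.012            # cable outer diameter, m (assumed)
T_amb, eps = 40.0, 0.9
sigma = 5.670e-8     # Stefan-Boltzmann constant, W/(m^2 K^4)
A = math.pi * D      # convecting/radiating surface per metre of cable

def heat_balance(T):
    """Generated minus dissipated heat per metre at surface temperature T."""
    R = R20 * (1 + alpha * (T - 20.0))                   # T-dependent resistance
    P = I**2 * R                                          # Joule losses, W/m
    h_conv = 1.32 * (max(T - T_amb, 1e-9) / D) ** 0.25    # natural convection
    q_rad = eps * sigma * ((T + 273.15)**4 - (T_amb + 273.15)**4)
    return P - A * (h_conv * (T - T_amb) + q_rad)

lo, hi = T_amb, T_amb + 400.0       # bracket the steady-state temperature
for _ in range(60):                 # bisection on the non-linear balance
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if heat_balance(mid) > 0 else (lo, mid)
print(f"steady-state cable temperature: {0.5 * (lo + hi):.1f} C")
```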

The Thermal-Constrained Real-Time Systems Design on Multi-Core Platforms -- An Analytical Approach

Sha, Shi 21 March 2018
Over the past decades, shrinking transistor sizes have enabled more and more transistors to be integrated into an IC chip to achieve ever higher computing performance. However, the semiconductor industry is now reaching a saturation point of Moore's Law, largely due to soaring power consumption and heat dissipation, among other factors. High chip temperature not only significantly increases packaging/cooling cost and degrades system performance and reliability, but also increases energy consumption and can even damage the chip permanently. Although designing 2D and even 3D multi-core processors helps to lower the power/thermal barrier of single-core architectures by exploiting thread/process-level parallelism, the higher power density and longer heat-removal path make the thermal problem substantially more challenging, surpassing the heat-dissipation capability of traditional cooling mechanisms such as cooling fans, heat sinks and heat spreaders in the design of new generations of computing systems. As a result, dynamic thermal management (DTM), i.e. controlling thermal behavior by dynamically varying computing performance and workload allocation on an IC chip, has been well recognized as an effective strategy to deal with these thermal challenges. Different from many existing DTM heuristics that are based on simple intuitions, we seek to address the thermal problem through a rigorous analytical approach, to achieve the high predictability required in real-time system design. In this regard, we have made a number of important contributions. First, we develop a series of lemmas and theorems that are general enough to uncover the fundamental principles and characteristics of the thermal model, peak-temperature identification and peak-temperature reduction, which are key to thermal-constrained real-time computer system design. Second, we develop a design-time frequency- and voltage-oscillating approach on multi-core platforms, which can greatly enhance system throughput and service capacity.
Third, different from the traditional workload-balancing approach, we develop a thermal-balancing approach that can substantially improve energy efficiency and task-partitioning feasibility, especially when system utilization is high or the temperature constraint is tight. The significance of our research is that not only do our proposed algorithms for throughput maximization and energy conservation significantly outperform existing work, as demonstrated in our extensive experimental results, but the theoretical results are very general and can greatly benefit other thermal-related research.
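The frequency-oscillating idea can be illustrated with a toy first-order RC thermal model: when only discrete frequency levels are available, alternating between a hot fast level and a cool slow level under a temperature cap can sustain a higher average frequency than the fastest constantly-safe level. The model and all constants below are assumptions for illustration, not the thesis's analytical framework.

```python
# Toy frequency oscillation under a temperature cap, first-order RC thermal
# model (constants assumed). With discrete levels {1, 2, 3} GHz, the fastest
# constantly-safe level is 2 GHz (steady state 61.8 C here), but oscillating
# between 3 and 1 GHz with hysteresis sustains a higher average frequency.
R, C = 1.2, 0.03                      # thermal resistance (K/W), capacitance (J/K)
T_amb, T_max = 45.0, 85.0             # ambient and cap temperatures (C)
power = lambda f: 2.0 + 1.5 * f**3    # static + dynamic power model (assumed)
f_hi, f_lo = 3.0, 1.0                 # the two levels we oscillate between

dt, T, f, work = 0.002, T_amb, f_hi, 0.0
for _ in range(5000):
    if T >= T_max:                    # too hot: fall back to the slow level
        f = f_lo
    elif T <= T_max - 5.0:            # cooled enough: back to the fast level
        f = f_hi
    T += dt / C * (power(f) - (T - T_amb) / R)   # explicit Euler RC update
    work += f * dt                    # accumulated cycles, a throughput proxy
print(f"average frequency: {work / (5000 * dt):.2f} GHz")
```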

Controlling execution time variability using COTS for Safety-critical systems

Bin, Jingyi 10 July 2014
While the safety-critical industry relied during the last decade on single-core Commercial Off-The-Shelf (COTS) architectures despite their inherent runtime variability, it is now considering a shift to multi-core COTS to match increasing performance requirements. However, the shift to multi-core COTS worsens the runtime-variability issue because of contention on shared hardware resources. Standard techniques for handling this variability, such as resource over-provisioning, cannot be applied to multi-cores, as the additional safety margins would offset most if not all of the multi-core performance gains. A possible solution is to capture the behavior of the potential contention mechanisms on shared hardware resources relative to each application co-running on the system. However, these contention mechanisms are usually very poorly documented. In this thesis, we introduce measurement techniques based on a set of dedicated stressing benchmarks and architectural hardware monitors to characterize (1) the architecture, by identifying the shared hardware resources and revealing their associated contention mechanisms, and (2) the applications, by learning how they behave with respect to shared resources. Based on this information, we propose a technique to estimate the WCET of an application in a pre-determined co-running context by simulating the worst-case contention on shared resources produced by the application's co-runners.
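The measurement idea — quantify contention by timing a workload alone and then co-running it with a dedicated stressing benchmark — can be sketched crudely as follows. The real methodology relies on hardware monitors and carefully targeted stressors; this toy version merely streams memory to pressure the shared cache and bus.

```python
# Rough sketch: measure the slowdown of a workload when co-run with a memory
# "stressing benchmark", as a proxy for shared-resource contention.
import multiprocessing as mp
import time
import numpy as np

def stressor(stop):
    """Streams over a large array to pressure the shared cache and memory bus."""
    a = np.zeros(50_000_000 // 8)     # ~50 MB, larger than typical LLCs
    while not stop.is_set():
        a += 1.0

def workload():
    """A memory-heavy task to be characterised; returns its runtime."""
    b = np.random.rand(2_000_000)
    t0 = time.perf_counter()
    for _ in range(50):
        b = np.sort(b)
    return time.perf_counter() - t0

if __name__ == "__main__":
    alone = workload()                # baseline: run in isolation
    stop = mp.Event()
    p = mp.Process(target=stressor, args=(stop,))
    p.start()
    contended = workload()            # same task under co-running stress
    stop.set(); p.join()
    print(f"slowdown under contention: {contended / alone:.2f}x")
```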

Optimisation of Performance Metrics of Embedded Hard Real-Time Systems using Software/Hardware Parallelism

Paolillo, Antonio 17 October 2018
Nowadays, embedded systems are part of our daily lives. Some of these systems are called safety-critical and have strong requirements in terms of safety and reliability. Additionally, these systems must have long autonomy, good performance and minimal cost. Finally, they must exhibit predictable behaviour and provide their results within firm deadlines. When these different constraints are combined in the requirement specification of a modern product, classic design techniques using single-core platforms are not sufficient. Academic research in the field of real-time embedded systems has produced numerous techniques to exploit the capabilities of modern hardware platforms. These techniques are often based on using the parallelism inherently present in modern hardware to improve system performance while reducing the platform's power dissipation. However, very few systems on the market use these state-of-the-art techniques, and few of the techniques have been validated in practical experiments. In this thesis, we study operating-system-level techniques that exploit hardware parallelism, through the implementation of parallel software, to boost the performance of target applications and to reduce overall system energy consumption while satisfying strict application timing requirements. We detail the theoretical foundations of the ideas applied in the dissertation and validate them through experimental work. To this aim, we use a new real-time operating system kernel written in the context of the creation of a spin-off of the Université libre de Bruxelles. Our experiments, based on executing applications on this operating system running on a real-world embedded platform, show that, compared to traditional design techniques, using parallel and power-aware scheduling techniques to exploit hardware and software parallelism allows embedded applications to execute with substantial savings in energy consumption. We present ongoing and future research work that exploits the capabilities of recent embedded platforms combining multi-core processors and reconfigurable hardware logic, allowing further improvements in performance and energy consumption.
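One intuition behind such energy savings is worth a back-of-the-envelope sketch: if dynamic power grows roughly with the cube of frequency, splitting a deadline-constrained workload across m cores lets each core run m times slower, cutting energy quadratically. The power model and constants below are illustrative assumptions.

```python
# Back-of-the-envelope sketch: with dynamic power P ~ k * f^3 (assumed model),
# running `work` cycles before a deadline on m cores at f = work/(m*deadline)
# per core gives total energy k * work^3 / (m^2 * deadline^2): quadratic
# savings as the core count grows.
work = 1e9            # cycles to execute before the deadline (assumed)
deadline = 1.0        # seconds
k = 1e-27             # power-model coefficient, W/Hz^3 (assumed)

def energy(cores):
    f = work / (cores * deadline)    # per-core frequency meeting the deadline
    return cores * k * f**3 * deadline

for m in (1, 2, 4):
    print(f"{m} core(s): {energy(m):.4f} J")   # 1.0000, 0.2500, 0.0625 J
```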

Distributed real-time simulation of numerical models: application to power-train

Ben Khaled-El Feki, Abir 27 May 2014
Nowadays the validation of electronic control units (ECUs) generally relies on Hardware-in-the-Loop simulation, where the missing physical systems are modeled using hybrid differential equations. The increasing complexity of this kind of model makes the trade-off between time efficiency and simulation accuracy hard to satisfy. This thesis investigates and proposes analytical and experimental methods for weakly-hard real-time co-simulation of hybrid dynamical models. It seeks in particular to define solutions to exploit more efficiently the parallelism provided by multi-core architectures, using new methods and paradigms of resource allocation. The first phase of the thesis studied the possibility of using numerical integration methods with step-size and order control and event detection in the context of real-time modular co-simulation when the time constraints are considered weakly hard. Moreover, the execution order of the different models was studied to show the influence of keeping or relaxing the data dependencies between coupled models on the simulation results. For this aim, we propose a new co-simulation method that allows full parallelism between models, yielding supra-linear speed-ups without adding errors related to their execution order.
Finally, the delay errors due to the communication step size between the models were reduced thanks to a proposed context-based extrapolation of input signals. All the proposed approaches constructively target enhancing simulation speed for compliance with real-time constraints while keeping the quality and accuracy of the simulation results under control; they are validated through several tests and experiments on an internal combustion engine model and integrated into a prototype version of the xMOD software.
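The extrapolation idea can be sketched simply: between communication points a receiving model sees stale inputs, and extrapolating from the last exchanged samples reduces the resulting delay error compared with holding the last value. The thesis's context-based extrapolation is more sophisticated; the linear version below, with assumed signals and step sizes, only shows the principle.

```python
# Illustrative comparison of zero-order hold vs. linear extrapolation of an
# input signal between co-simulation communication points (values assumed).
import numpy as np

H = 0.1                                   # communication step, s (assumed)
t_fine = np.linspace(0.0, 1.0, 201)       # model-internal integration times
u = np.sin                                # true input signal (placeholder)

def extrapolate(t, samples):
    """Linear extrapolation from the two last exchanged samples."""
    (t0, u0), (t1, u1) = samples[-2], samples[-1]
    return u1 + (u1 - u0) / (t1 - t0) * (t - t1)

samples = [(-H, u(-H)), (0.0, u(0.0))]    # exchange history
err_zoh, err_lin = 0.0, 0.0
for t in t_fine:
    if t >= samples[-1][0] + H:           # a communication point: exchange
        samples.append((t, u(t)))
    err_zoh += abs(u(t) - samples[-1][1])          # zero-order hold error
    err_lin += abs(u(t) - extrapolate(t, samples)) # extrapolation error
print(f"ZOH error {err_zoh:.2f} vs linear extrapolation {err_lin:.2f}")
```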

Numerical Quality and High Performance in Interval Linear Algebra on Multi-Core Processors

Theveny, Philippe 31 October 2014
This work aims at determining suitable scopes for several interval matrix multiplication algorithms. First, we quantify the numerical quality. Former error analyses of interval matrix products establish bounds on the radius overestimation while neglecting the roundoff error. We discuss several possible measures for interval approximations, then bound the roundoff error and compare this bound experimentally with the global error distribution on several random data sets. This approach highlights the relative importance of the roundoff and arithmetic errors depending on the value and homogeneity of the relative accuracies of the inputs, on the matrix dimension, and on the working precision. It also leads to a new algorithm that is cheaper yet as accurate as previous ones under well-identified conditions. Second, we exploit the parallelism of linear algebra. Previous implementations use calls to BLAS routines on numerical matrices. We show that this may lead to wrong interval results and also restricts the scalability of the performance when the core count increases. To overcome these problems, we implement a blocked version with OpenMP threads executing block kernels with vector instructions. Timings on a 4-octo-core machine show that this implementation scales better than the BLAS-based one and that the costs of numerical and interval matrix products are comparable.
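For reference, the classic midpoint-radius interval matrix product (in the spirit of Rump's algorithm) looks as follows. Note that this plain NumPy sketch omits the directed rounding needed to enclose floating-point roundoff — which is precisely the error term the thesis quantifies — so it is illustrative, not a validated implementation.

```python
# Midpoint-radius interval matrix product: the result midpoint is mA @ mB and
# the radius bounds the spread of all products of A = [mA +/- rA] and
# B = [mB +/- rB]. Directed rounding for the roundoff enclosure is omitted.
import numpy as np

def interval_matmul(mA, rA, mB, rB):
    """Product of interval matrices <mA, rA> and <mB, rB>."""
    mC = mA @ mB
    # |mA| @ rB + rA @ (|mB| + rB) bounds the radius of the exact product
    rC = np.abs(mA) @ rB + rA @ (np.abs(mB) + rB)
    return mC, rC

# usage: two random interval matrices with 1% relative radii
rng = np.random.default_rng(1)
mA, mB = rng.normal(size=(100, 100)), rng.normal(size=(100, 100))
rA, rB = 0.01 * np.abs(mA), 0.01 * np.abs(mB)
mC, rC = interval_matmul(mA, rA, mB, rB)
```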
