31

Statistical Yield Analysis and Design for Nanometer VLSI

Jaffari, Javid January 2010 (has links)
Process variability is the pivotal factor impacting the design of high-yield integrated circuits and systems in deep sub-micron CMOS technologies. The electrical and physical properties of transistors and interconnects, the building blocks of integrated circuits, are prone to significant variations that directly affect the performance and power consumption of the fabricated devices, severely degrading the manufacturing yield. Moreover, the large number of transistors on a single chip adds further challenges to the analysis of variation effects, a critical task in diagnosing the cause of failure and designing for yield. Reliable and efficient statistical analysis methodologies in the various design phases are key to predicting the yield before entering such an expensive fabrication process. In this thesis, the impacts of process variations are examined at three different levels: device, circuit, and micro-architecture. Variation models are provided for each level of abstraction, and new methodologies are proposed for efficient statistical analysis and design under variation. At the circuit level, the variability analysis of three crucial sub-blocks of today's systems-on-chip, namely digital circuits, memory cells, and analog blocks, is targeted. Accurate and efficient yield analysis of circuits is recognized as an extremely challenging task within the electronic design automation community. The large scale of digital circuits, the extremely high yield requirement for memory cells, and time-consuming analog circuit simulation are major concerns in the development of any statistical analysis technique. In this thesis, several sampling-based methods are proposed for these three types of circuits to significantly improve the run-time of the traditional Monte Carlo (MC) method without compromising accuracy. The proposed sampling-based yield analysis methods retain the most appealing feature of the MC method: the capability to handle any complex circuit model. Through the use and engineering of advanced variance-reduction and sampling methods, ultra-fast yield estimation solutions are provided for different types of VLSI circuits. Such methods include control variates, importance sampling, correlation-controlled Latin Hypercube Sampling, and Quasi-Monte Carlo. At the device level, a methodology is proposed which introduces a variation-aware design perspective for MOS devices in aggressively scaled geometries. The method introduces a device-level yield measure that targets the saturation and leakage currents of an MOS transistor, and a statistical method is developed to optimize the advanced doping profiles and geometry features of a device for maximum device-level yield. Finally, a statistical thermal analysis framework is proposed that accounts for process and thermal variations simultaneously at the micro-architectural level. The analyzer builds on the fact that process variations lead to uncertain leakage power sources, so that the thermal profile itself is probabilistic in nature. Therefore, a combined process-thermal-leakage analysis computes a more reliable full-chip statistical leakage power yield.
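As a rough illustration of the variance-reduction theme, and not of the thesis' actual circuit models, the sketch below estimates a toy timing yield with plain Monte Carlo and with Latin Hypercube Sampling; the delay model, parameter count, and spec are invented stand-ins.

```python
import numpy as np
from scipy.stats import norm, qmc

rng = np.random.default_rng(0)
DIM, SPEC = 6, 1.25          # number of varying parameters, delay spec (ns)

def delay_model(dvth):
    """Toy gate-delay surrogate: nominal 1 ns plus a mildly nonlinear
    response to per-transistor threshold-voltage shifts (in sigmas)."""
    return 1.0 + 0.05 * dvth.sum(axis=1) + 0.02 * (dvth ** 2).sum(axis=1)

def yield_mc(n):
    dvth = rng.standard_normal((n, DIM))
    return np.mean(delay_model(dvth) <= SPEC)

def yield_lhs(n):
    u = qmc.LatinHypercube(d=DIM, seed=0).random(n)   # stratified U(0,1)^d
    return np.mean(delay_model(norm.ppf(u)) <= SPEC)  # map to standard normals

print(f"plain MC yield ~ {yield_mc(2000):.3f}")
print(f"LHS yield      ~ {yield_lhs(2000):.3f}")
```

At equal sample counts, the stratified LHS estimate typically varies less from run to run, which is the effect the thesis engineers far more aggressively with control variates, importance sampling, and quasi-random sequences.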
33

The Speed of Clouds: Utilizing Adaptive Sampling to Optimize a Real-Time Volumetric Cloud Renderer

Hydén, Emrik January 2023 (has links)
Volumetric clouds are often used in video games to improve realism and graphical quality. However, to achieve real-time rendered clouds, optimizations have to be implemented as part of the rendering algorithm. Such optimizations improve performance but can also degrade the visual quality of the clouds. This thesis investigates the use of bilinear interpolation to improve the performance of a volumetric cloud renderer while avoiding any substantial reduction in visual quality, and extends this by looking at the effect of adaptively sampling the pixel colors. The renderer itself is created in Unity3D using a ray marching algorithm. As part of the literature study, the research also explores different ways of measuring visual quality within real-time rendering; based on this, the thesis uses the Structural Similarity Index Measure (SSIM) to measure visual quality. The research found that utilizing bilinear interpolation to ray march every eighth pixel results in a performance gain of 45%. However, it also reduces the visual quality of the volumetric clouds. This is counteracted by using adaptive sampling to interpolate only where the standard deviation of pixel colors is below a threshold. The optimal value of this threshold cannot be determined universally, since it depends on the requirements of the renderer; instead, it has to be determined on a case-by-case basis.
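A minimal sketch of that strategy, assuming invented parameters: ray march every eighth pixel, then bilinearly interpolate wherever the coarse neighbourhood's standard deviation stays below a threshold. `ray_march()` here is a cheap placeholder, not the thesis' Unity3D renderer.

```python
import numpy as np

H, W, STEP, SIGMA_MAX = 64, 64, 8, 0.05   # image size, stride, std-dev threshold

def ray_march(y, x):
    """Stand-in for the expensive volumetric ray march of a single pixel."""
    return 0.5 + 0.5 * np.sin(0.2 * x) * np.cos(0.15 * y)

# Pass 1: ray march only every STEP-th pixel.
ys, xs = np.arange(0, H, STEP), np.arange(0, W, STEP)
coarse = np.array([[ray_march(y, x) for x in xs] for y in ys])

# Pass 2: fill in the rest, interpolating only where the 2x2 coarse
# neighbourhood is smooth (std below SIGMA_MAX), ray marching otherwise.
image = np.empty((H, W))
for y in range(H):
    for x in range(W):
        iy = min(y // STEP, coarse.shape[0] - 2)   # clamp at the border
        ix = min(x // STEP, coarse.shape[1] - 2)
        patch = coarse[iy:iy + 2, ix:ix + 2]
        if patch.std() < SIGMA_MAX:                # smooth: bilinear interpolation
            ty, tx = (y - iy * STEP) / STEP, (x - ix * STEP) / STEP
            top = (1 - tx) * patch[0, 0] + tx * patch[0, 1]
            bot = (1 - tx) * patch[1, 0] + tx * patch[1, 1]
            image[y, x] = (1 - ty) * top + ty * bot
        else:                                      # detailed: full ray march
            image[y, x] = ray_march(y, x)
```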
34

A Practical Frequency-Based Approach to Adaptive Image-Space Sampling

Dubouchet, Renaud Adrien 10 1900 (has links)
In realistic image synthesis, a pixel's final intensity is computed by estimating a multi-dimensional shading integral. A large part of the research in this domain is thus aimed at finding new techniques to reduce the computational cost of rendering while preserving the fidelity and correctness of the resulting images. When trying to reduce rendering costs to approach real-time computation, complex realistic effects are often left aside or replaced by clever but mathematically incorrect tricks. To accelerate rendering, previous work has either addressed the computation of individual pixels by improving the underlying numerical integration routines, or sought to amortize the computation across regions of an image using adaptive methods based on predictive models of light transport. The objective of this thesis, and of the resulting paper, is to build upon the latter class of methods [Durand2005] and foray into fast adaptive rendering techniques that use frequency-based light transport analysis to efficiently guide and prioritize ray tracing. We thus propose an adaptive sampling and reconstruction approach to render animated scenes lit by environment lighting, faithfully reconstructing all-frequency shading effects such as shadows and reflections while preserving temporal coherency.
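As a loose, hypothetical illustration of the guiding idea rather than the paper's algorithm, a predicted local bandwidth map can drive how many shading samples each pixel region receives:

```python
import numpy as np

def sample_budget(bandwidth, n_total):
    """Distribute n_total shading samples proportionally to the predicted
    local bandwidth, with at least one sample everywhere (hypothetical rule)."""
    weights = bandwidth / bandwidth.sum()
    return np.maximum(1, np.round(weights * n_total)).astype(int)

# Toy bandwidth map: a sharp shadow edge (column 4) needs dense sampling,
# while smooth regions get by with a single sample per region.
bandwidth = np.ones((8, 8))
bandwidth[:, 4] = 20.0
print(sample_budget(bandwidth, n_total=256))
```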
35

Adaptive dual-comb spectroscopy

Poisson, Antonin 05 July 2013 (has links)
Dual-comb Fourier-transform spectroscopy takes advantage of an interferometer without moving parts. The interference pattern between two femtosecond frequency combs, broadband laser sources whose spectra consist of evenly spaced narrow lines, is measured. The measurement time and the spectral resolution are significantly improved compared to traditional Fourier spectrometers. However, the required short-term stability of the combs cannot be achieved by classic locking methods, and until now no high-quality spectra could be recorded within a very short acquisition time. This thesis reports on the development of a real-time correction method able to compensate for the combs' residual fluctuations and to restore undistorted spectra. This analog technique does not require any locking system or a posteriori calculation. Its performance is demonstrated in the near-infrared (1.5 µm) and in the visible (520 nm) with fiber-based femtosecond lasers. Doppler-limited molecular spectra spanning 12 THz are measured within 500 µs and are in excellent agreement with databases. For the first time, the full potential of dual-comb spectroscopy is demonstrated. The mid-infrared is an attractive spectral range for molecular spectroscopy because most molecules exhibit strong, characteristic absorptions there, so extending dual-comb spectroscopy to this region is the next goal. Toward this goal, a comb emitting around 3 µm is characterized, based on non-linear difference-frequency generation from an erbium oscillator spectrally broadened with a highly non-linear fiber.
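The frequency mapping at the heart of the technique can be stated compactly; the notation below is the standard dual-comb convention, not taken from the thesis itself.

```latex
% Lines of the two combs, with offsets f_0, f'_0 and repetition rates
% differing by \Delta f_rep:
\nu_n  = n\, f_\mathrm{rep} + f_0, \qquad
\nu'_n = n\, \bigl( f_\mathrm{rep} + \Delta f_\mathrm{rep} \bigr) + f'_0
% Their pairwise beat notes on the photodetector form a radio-frequency comb
f_{\mathrm{RF},\, n} = n\, \Delta f_\mathrm{rep} + \bigl( f'_0 - f_0 \bigr)
% so the optical spectrum is mapped into the RF domain, compressed by the
% factor f_\mathrm{rep} / \Delta f_\mathrm{rep}. Residual fluctuations of
% f_\mathrm{rep} and f_0 blur this mapping, which is what the real-time
% analog correction described above compensates.
```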
36

In-network database query processing for wireless sensor networks

Al-Hoqani, Noura Y. S. January 2018 (has links)
Smart sensor devices have matured to the point where large, distributed networks of such sensors are starting to be deployed. Such networks can include tens or hundreds of independent nodes that perform their functions without human intervention, such as recharging of batteries or configuration of network routes. Each sensor in a wireless sensor network is a microsystem consisting of memory, a processor, transducers, and a low-bandwidth, low-range radio transceiver. This study investigates an adaptive sampling strategy for WSNs aimed at reducing the number of data samples by sensing data only when a significant change in the monitored process is detected; this detection strategy is based on an extension of Holt's method and a statistical model. To investigate the strategy, household water consumption is used as a case study. A query distribution approach is proposed and presented in detail in Chapter 5. Our wireless sensor query engine is programmed on the Sensinode cc2430 testbed; the implemented model used on the wireless sensor platform and its architecture are presented in Chapters 6, 7, and 8. This thesis contributes the design of the experimental simulation setup and the development of the required database-interface GUI sensing system, which enables the end user to send queries to the sensor network whenever needed. The On-Demand Query Sensing (ODQS) system is enhanced with a probabilistic model so that sensing is triggered only when the system cannot otherwise answer user queries. Moreover, a dynamic aggregation methodology is integrated to make the system more adaptive to query message costs. A dynamic on-demand approach for aggregated queries is implemented in the wireless sensor network by integrating dynamic programming for optimal query decisions; the optimality criterion in our experiments is query cost. In-network query processing for wireless sensor networks is discussed in detail in order to develop a more energy-efficient approach to query processing. Initially, a survey of research on existing WSN query processing approaches is presented. Building on this background, the primary novel achievements include an adaptive sampling mechanism and a dynamic query optimiser. These new approaches are extremely helpful when existing statistics are not sufficient to generate an optimal plan. There are two distinct aspects in query processing optimisation: dynamic adaptive query plans, which focus on improving the initial execution of a query, and dynamic adaptive statistics, which provide the best query execution plan to improve subsequent executions of the aggregated on-demand queries requested by multiple end users. In-network query processing is attractive to researchers developing user-friendly sensing systems. Since sensors are resource-limited, battery-powered devices, more robust features are recommended to limit communication access to the sensor nodes in order to maximise sensor lifetime. For this reason, a new architecture is proposed that combines a probabilistic modelling technique with dynamic programming (DP) query processing to optimise the communication cost of queries. In this thesis, a dynamic technique to enhance the query engine for the interactive sensing system interface is developed.
The probabilistic technique is responsible for reducing communication costs for each query executed outside the wireless sensor network. As remote sensors have limited resources and rely on battery power, control strategies should limit communication access to sensor nodes to maximise battery life. We propose an energy-efficient data acquisition system to extend the battery life of nodes in wireless sensor networks. The system considers a graph-based network structure, evaluates multiple query execution plans, and selects the plan with the lowest cost obtained from an energy consumption model. A genetic algorithm is also used to analyse the performance of the approach. Experimental testing demonstrates the proposed on-demand sensing system's ability to predict the answer to a query injected by the end user, based on the sensor network architecture and the attributes of the input query statement, and the query engine's ability to determine a near-optimal execution plan given specific constraints on these query attributes. As a result, the thesis contributes to the state of the art in distributed wireless sensor network query design, implementation, analysis, evaluation, performance, and optimisation.
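A hedged sketch of change-driven sampling built on Holt's linear (double exponential) smoothing, the method the detection strategy above extends; the smoothing factors, threshold, and toy water-consumption trace are illustrative assumptions, not values from the thesis.

```python
ALPHA, BETA, THRESH = 0.5, 0.3, 2.0   # smoothing factors and change threshold

def holt_adaptive(readings):
    """Yield (time, value) only when a reading deviates from the Holt
    one-step-ahead forecast by more than THRESH (a 'significant change')."""
    level, trend = readings[0], 0.0
    for t, x in enumerate(readings[1:], start=1):
        forecast = level + trend
        if abs(x - forecast) > THRESH:
            yield t, x                               # change detected: report
        new_level = ALPHA * x + (1 - ALPHA) * forecast
        trend = BETA * (new_level - level) + (1 - BETA) * trend
        level = new_level

water = [3.0, 3.1, 3.2, 3.2, 9.5, 9.6, 9.4, 3.1]     # toy consumption trace
print(list(holt_adaptive(water)))                    # only the jumps are reported
```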
37

Generalized Sampling-Based Feedback Motion Planners

Kumar, Sandip December 2011 (has links)
The motion planning problem can be formulated as a Markov decision process (MDP) if the uncertainties in the robot motion and environment can be modeled probabilistically. The complexity of solving these MDPs grows exponentially with the dimension of the problem, making them nearly impossible to solve even without constraints. Using hierarchical methods, these MDPs can be transformed into a semi-Markov decision process (SMDP) which only needs to be solved at certain landmark states. In the deterministic robot motion planning community, sampling-based algorithms such as probabilistic roadmaps (PRM) and rapidly exploring random trees (RRTs) have been successful in solving very high-dimensional deterministic problems. However, they are not robust to uncertainty in the system dynamics; hence, one of the primary objectives of this work is to generalize PRM/RRT to solve motion planning under uncertainty. We first present generalizations of the randomized sampling-based algorithms PRM and RRT that incorporate process uncertainty and obstacle location uncertainty, termed "generalized PRM" (GPRM) and "generalized RRT" (GRRT). The controllers used at the lower level of these planners are feedback controllers which ensure convergence of trajectories while mitigating the effects of process uncertainty. The results indicate that the algorithms solve the motion planning problem for a single agent in continuous state/control spaces in the presence of process uncertainty and constraints such as obstacles and other state/input constraints. Secondly, a novel adaptive sampling technique, termed "adaptive GPRM" (AGPRM), is proposed for these generalized planners to increase their efficiency and overall success probability. It was implemented on high-dimensional n-link robot manipulators with up to 8 links, i.e., in a 16-dimensional state space. The results demonstrate the ability of the proposed algorithm to handle the motion planning problem for highly non-linear systems in very high-dimensional state spaces. Finally, a solution methodology, termed "multi-agent AGPRM" (MAGPRM), is proposed to solve the multi-agent motion planning problem under uncertainty. The technique uses an existing solution technique for the multiple traveling salesman problem (MTSP) in conjunction with GPRM. For real-time implementation, an inter-agent collision detection and avoidance module was designed which ensures that no two agents collide at any time-step. The algorithm was tested on teams of homogeneous and heterogeneous agents in cluttered obstacle spaces and demonstrated the ability to handle such problems in continuous state/control spaces in the presence of process uncertainty.
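For orientation, here is a compact generic RRT in the plane, i.e. the deterministic baseline that GRRT generalizes; the goal bias, step size, and single disc obstacle are illustrative choices, not parameters from the work, and edge collision checking is reduced to point checks for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
START, GOAL = np.array([0.1, 0.1]), np.array([0.9, 0.9])
STEP, MAX_ITERS = 0.05, 2000
OBST_C, OBST_R = np.array([0.5, 0.5]), 0.2           # one disc obstacle

def collision_free(p):
    return np.linalg.norm(p - OBST_C) > OBST_R

nodes, parent = [START], {0: None}                   # tree rooted at START
for _ in range(MAX_ITERS):
    target = GOAL if rng.random() < 0.1 else rng.random(2)   # 10% goal bias
    near = min(range(len(nodes)),
               key=lambda i: np.linalg.norm(nodes[i] - target))
    step_dir = target - nodes[near]
    new = nodes[near] + STEP * step_dir / (np.linalg.norm(step_dir) + 1e-12)
    if collision_free(new):                          # point check only (sketch)
        parent[len(nodes)] = near
        nodes.append(new)
        if np.linalg.norm(new - GOAL) < STEP:        # reached the goal region
            break
print(f"tree size: {len(nodes)}")
```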
38

Reconstruction of Structured Functions From Sparse Fourier Data

Wischerhoff, Marius 14 January 2015 (has links)
No description available.
39

Big data management for periodic wireless sensor networks

Medlej, Maguy 30 June 2014 (has links)
This thesis proposes novel big data management techniques for periodic sensor networks, embracing the limitations imposed by WSNs and the nature of sensor data. First, we propose an adaptive sampling approach for periodic data collection that allows each sensor node to adapt its sampling rate to the changing dynamics of the physical process, based on the dependence of the conditional variance of measurements over time. We then propose a multiple-level activity model that uses behavioral functions modeled by modified Bezier curves to define application classes and allow for adaptive sampling rates. Next, we address periodic data aggregation at the sensor node level by introducing two tree-based, bi-level periodic data aggregation techniques for periodic sensor networks. The first periodically examines each measurement taken at the first tier and cleans the data while conserving the number of occurrences of each captured measure. The second performs data aggregation between groups of nodes at the aggregator level while preserving the quality of the information: we propose a new data aggregation approach that identifies near-duplicate nodes generating similar sets of collected data in periodic applications, suggest a prefix-filtering approach to optimize the computation of similarity values, and define a new filtering technique based on the quality of information to overcome the data latency challenge. Last but not least, we propose a new data mining method, based on the existing k-means clustering algorithm, to mine the aggregated data while overcoming its high computational cost; to this end, we develop a new multilevel optimized version of k-means based on the prefix-filtering technique.
Finally, all the proposed approaches for data management in periodic sensor networks are validated through simulation results based on real data generated by a periodic wireless sensor network.
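A small sketch of the prefix-filtering idea used to spot near-duplicate nodes: sort each node's readings in one global order and run the exact similarity computation only when two short prefixes overlap. The readings and the Jaccard threshold are invented for illustration.

```python
from math import ceil

T = 0.8   # Jaccard similarity threshold for "near-duplicate" nodes

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def prefix(readings):
    """Prefix-filter principle: two sets with Jaccard >= T must share an
    element among the first len(s) - ceil(T * len(s)) + 1 elements when
    both are sorted in the same global order."""
    s = sorted(readings)
    return set(s[: len(s) - ceil(T * len(s)) + 1])

node_a = [18, 19, 20, 21, 22, 24]   # toy periodic temperature readings
node_b = [18, 19, 20, 21, 22, 23]
node_c = [30, 31, 32, 33, 34, 35]

for name, other in (("b", node_b), ("c", node_c)):
    if prefix(node_a) & prefix(other):          # cheap candidate check
        print(f"a~{name}: Jaccard = {jaccard(node_a, other):.2f}")
    else:
        print(f"a~{name}: pruned by prefix filter")
```

The filter only prunes pairs that provably cannot reach the threshold; surviving candidates still go through the exact comparison.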
40

Shape optimization of multi-scale and multi-physics problems: application to heat exchangers

Mastrippolito, Franck 14 December 2018 (has links)
Heat exchangers are used in many industrial applications, and optimizing their performance is a key point in improving energy efficiency. Heat exchanger behaviour is a multi-scale issue, where local-scale heat transfer enhancement mechanisms coexist with global-scale flow distribution ones. It is also multi-physics, involving fluid mechanics, heat transfer, and fouling phenomena. The present work deals with multi-objective shape optimization of heat exchangers. The proposed method is sufficiently robust to address multi-scale and multi-physics issues and allows industrial applications. Heat exchanger performance is evaluated using computational fluid dynamics (CFD) simulations and global methods (ε-NTU). The optimization tools are a genetic algorithm coupled with kriging-based metamodelling, and clustering and Self-Organizing Maps (SOM) are used to analyse the optimization results. A metamodel builds an approximation of a simulator response (CFD) whose evaluation cost is low enough to be used with the genetic algorithm. Kriging can address discontinuities or perturbations of the response by introducing a nugget effect, and adaptive sampling is used to build cheap yet precise approximations.
The optimization method is applied to different configurations representative of heat exchanger behaviour in both its multi-scale and multi-physics (fouling) aspects. Results show that metamodelling is a key point of the method, ensuring the robustness and versatility of the optimization process. It also allows building local-scale correlations that are then used to determine the global performance of the heat exchanger. Clustering and SOM highlight a finite number of shapes representing compromises between antagonistic objective functions, directly usable in an industrial context.
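A hedged sketch of the kriging-with-enrichment loop described above, using scikit-learn's Gaussian process regressor with a `WhiteKernel` playing the role of the nugget effect; the one-dimensional `simulator` stands in for an expensive CFD run, and every numeric choice is an assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

def simulator(x):
    """Placeholder for one expensive CFD evaluation (noisy toy response)."""
    return np.sin(6 * x) + 0.05 * np.random.default_rng(0).standard_normal(x.shape)

X = np.linspace(0.0, 1.0, 5).reshape(-1, 1)          # initial design of experiments
y = simulator(X).ravel()
grid = np.linspace(0.0, 1.0, 201).reshape(-1, 1)     # candidate design points

for _ in range(10):                                  # enrichment iterations
    # Matern kernel + WhiteKernel: the white-noise term acts as the nugget
    # effect, absorbing perturbations of the simulator response.
    gp = GaussianProcessRegressor(Matern(nu=2.5) + WhiteKernel(1e-4))
    gp.fit(X, y)
    _, std = gp.predict(grid, return_std=True)
    x_new = grid[np.argmax(std)]                     # most uncertain candidate
    X = np.vstack([X, [x_new]])
    y = np.append(y, simulator(x_new.reshape(1, 1)))

print(f"final design size: {len(X)}")
```

Each iteration refits the metamodel and adds the design point where the kriging predictive standard deviation is largest, one of the simplest enrichment criteria; the resulting surrogate is what the genetic algorithm would then query in place of the CFD solver.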
