21 |
Optimalizace nastavení závodního vozu simulátoru TORCS / Optimization of a Racing Car Setup within TORCS Simulator
Srnec, Pavel, January 2012 (has links)
This master's thesis deals with nature-inspired optimization techniques. Evolutionary algorithms, together with the main topic of the thesis, Particle Swarm Optimization, are introduced in the following chapter. The car setup and the TORCS simulator are introduced in the next chapter, and the design and implementation are described in the subsequent chapters. The goal of this master's thesis is to find optimal car setups for different circuits.
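As a rough illustration of the Particle Swarm Optimization engine such a setup search relies on, the following is a minimal sketch of a generic PSO loop in Python. The bounds, the number of setup parameters and the `simulate_lap_time` objective are hypothetical stand-ins; the thesis's actual TORCS-based fitness evaluation is not reproduced here.

```python
import numpy as np

def pso(objective, bounds, n_particles=20, n_iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimise `objective` over a box defined by `bounds` (list of (lo, hi))."""
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    # Random initial positions and zero velocities inside the box
    x = lo + np.random.rand(n_particles, dim) * (hi - lo)
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()

    for _ in range(n_iters):
        r1, r2 = np.random.rand(n_particles, dim), np.random.rand(n_particles, dim)
        # Velocity update: inertia + cognitive pull (personal best) + social pull (global best)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Hypothetical usage: each dimension is one setup parameter (wing angles, gear ratios, ...)
# and the objective would be a lap time returned by a TORCS run (not implemented here).
# best_setup, best_time = pso(lambda s: simulate_lap_time(s), bounds=[(0, 1)] * 5)
```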
|
22 |
Antenna design using optimization techniques over various computational electromagnetics. Antenna design structures using genetic algorithm, Particle Swarm and Firefly algorithms optimization methods applied on several electromagnetics numerical solutions and applications including antenna measurements and comparisons
Abdussalam, Fathi M.A., January 2018 (has links)
Electromagnetic design problems can involve discontinuous and non-differentiable regions. It is therefore of great interest to implement an appropriate optimisation approach that preserves computational resources and reaches a global optimum without being trapped in local optima, while also coping with issues such as nonlinearity and discontinuous phenomena involving a large number of variables.
Problems such as lengthy computation time, constraints imposed by antenna requirements, and demand for large computer memory are very common in the analysis, owing to the increased interest in tackling large-scale, more complex and higher-dimensional problems. At the same time, the demand for ever more accurate results grows constantly. In this context, it is very important to find out how recently developed optimization methods can contribute to the solution of the aforementioned problems.
Thereafter, the key goals of this work are to model, study and design low-profile antennas for wireless and mobile communications applications using an optimization process built on a computational electromagnetics numerical solution. The numerical solution may be performed with one method or a hybrid of methods, subject to the antenna design requirements and its environment.
Firstly, the thesis presents the design and modelling concept of a small uni-planar Ultra-Wideband antenna. The fitness functions and the geometrical antenna elements required for such a design are considered. Two antennas are designed, implemented and measured, and the computed and measured outcomes are found to be in reasonable agreement. Secondly, the work addresses how the resonance modes of microstrip patches can be controlled using the Method of Moments (MoM); results show how the modes can be adjusted using MoM. Finally, the design implications of a balanced structure for mobile handsets covering the LTE standards 698-748 MHz and 2500-2690 MHz are explored using the firefly algorithm. The optimised balanced antenna exhibits reasonable matching performance, including near-omnidirectional radiation over the two desired operating bands, with reduced EMF, which leads to a great improvement in immunity in hand-held operation. / General Secretariat of Education and Scientific Research Libya
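For readers unfamiliar with the firefly algorithm mentioned above, the following is a minimal, hedged sketch of its core update rule (attractiveness decaying with distance, plus a random walk). The decision variables and the EM-solver-based objective for the balanced LTE antenna are assumptions and are only stubbed in the comments.

```python
import numpy as np

def firefly(objective, bounds, n_fireflies=15, n_iters=80,
            alpha=0.2, beta0=1.0, gamma=1.0):
    """Minimise `objective`; "brighter" fireflies are those with lower cost."""
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    x = lo + np.random.rand(n_fireflies, dim) * (hi - lo)
    intensity = np.array([objective(p) for p in x])

    for _ in range(n_iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if intensity[j] < intensity[i]:          # j is brighter than i
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                    x[i] += beta * (x[j] - x[i]) + alpha * (np.random.rand(dim) - 0.5)
                    x[i] = np.clip(x[i], lo, hi)
                    intensity[i] = objective(x[i])
    best = np.argmin(intensity)
    return x[best], intensity[best]

# Hypothetical usage: the decision variables could be the balanced-antenna geometry,
# and the objective a weighted reflection coefficient over 698-748 MHz and 2500-2690 MHz
# returned by an EM solver (not shown here).
```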
|
23 |
Ant Colony Optimization Technique to Solve Min-Max Multi-Depot Vehicle Routing Problem
Venkata Narasimha, Koushik Srinath, January 2011 (has links)
No description available.
|
24 |
Méthodologie d’analyse de fiabilité basée sur des techniques heuristiques d’optimisation et modèles sans maillage : applications aux systèmes mécaniques / Reliability analysis methodology based on heuristic optimization techniques and non-mesh models : applications to mechanical systems
Rojas, Jhojan Enrique, 04 April 2008 (links)
Structural Engineering designs must satisfy performance criteria such as safety, functionality and durability, generally established in the pre-design phase. Traditionally, engineering designs use deterministic information about dimensions, material properties and external loads. However, the structural behaviour of complex models needs to take into account different kinds and levels of uncertainty. In this sense, the analysis is preferably made in terms of probabilities, since estimating the probability of failure is crucial in Structural Engineering. Reliability is the probability related to the perfect operation of a structural system throughout its functional lifetime under normal operating conditions, and a major interest of reliability analysis is to find the best compromise between cost and safety. Aiming to eliminate the main difficulties of traditional reliability methods such as the First- and Second-Order Reliability Methods (FORM and SORM), this work proposes the so-called Heuristic-based Reliability Method (HBRM). The heuristic optimization techniques used in this method are Genetic Algorithms, Particle Swarm Optimization and Ant Colony Optimization. The HBRM does not require an initial guess of the design solution because it is based on multidirectional search; moreover, it does not need to compute the partial derivatives of the limit state function with respect to the random variables. The evaluation of these functions is carried out using analytical, semi-analytical and numerical models, implemented with the Ritz method (in MATLAB®), the finite element method (in MATLAB® and ANSYS®) and the Element-free Galerkin (EFG) method (in MATLAB®). The combination of these reliability analyses, optimization procedures and modelling methods constitutes the reliability-based design methodology proposed in this work. The different numerical tools were used to evaluate their advantages and disadvantages for specific applications and to demonstrate the applicability and robustness of this alternative approach, supported by the good results obtained in most applications. One-, two- and three-dimensional applications in statics, stability and dynamics of structures explore the explicit and implicit evaluation of multiple limit state functions of several random variables. Deterministic validation and stochastic analyses combined with the Muscolino perturbation method provide the basis for reliability analysis of two- and three-dimensional fluid-structure interaction problems, and the methodology is applied to an industrial structure through modal synthesis. Results for laminated composite plates modelled by the EFG method are compared with their counterparts obtained by finite elements. Finally, an extension to reliability-based design optimization is proposed using the optimal safety factors method, with numerical applications that perform weight minimization under a target reliability index for systems modelled by the FE and EFG methods.
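To make the HBRM idea concrete, the following is a minimal sketch of a derivative-free reliability analysis in standard normal space: the reliability index is sought as the minimum distance from the origin to the limit-state surface, using a simple population-based search that stands in for the GA/PSO/ACO engines. The linear resistance-load limit state used as an example is hypothetical and not taken from the thesis.

```python
import numpy as np
from scipy.stats import norm

def reliability_index(limit_state, dim, penalty=1e3, pop=200, iters=300, sigma=0.5):
    """
    Estimate the Hasofer-Lind reliability index beta = min ||u|| s.t. g(u) = 0
    in standard normal space, with a derivative-free population search
    (a stand-in for the heuristic engines used by HBRM).
    """
    def cost(u):
        return np.linalg.norm(u) + penalty * abs(limit_state(u))  # penalised distance to origin

    u = np.random.randn(pop, dim)                       # initial population
    best = min(u, key=cost)
    for _ in range(iters):
        cand = best + sigma * np.random.randn(pop, dim) # multidirectional perturbations
        cand_best = min(cand, key=cost)
        if cost(cand_best) < cost(best):
            best = cand_best
    beta = np.linalg.norm(best)
    return beta, norm.cdf(-beta)                        # beta and approximate failure probability

# Hypothetical example: g = R - S with resistance R ~ N(10, 1) and load S ~ N(6, 1.5),
# written in standard normal coordinates u = (u_R, u_S).
g = lambda u: (10 + 1.0 * u[0]) - (6 + 1.5 * u[1])
beta, pf = reliability_index(g, dim=2)
```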
|
25 |
AI-Assisted Optimization Framework for Advanced EM Problems
Rosatti, Pietro, 02 July 2024 (links)
This thesis concerns the study, development and analysis of innovative artificial intelligence (AI)-driven optimization techniques within the System-by-Design (SbD) framework, aimed at efficiently addressing the computational complexity inherent in advanced electromagnetic (EM) problems. By leveraging the available a-priori information as well as the proper integration of machine learning (ML) techniques with intelligent exploration strategies, the SbD paradigm enables the effective and reliable solution of the EM problem at hand, with user-defined performance and in a reasonable amount of time. The flexibility of the AI-driven SbD framework is demonstrated in practice with the implementation of two solution strategies: one addressing the fully non-linear inverse scattering problem (ISP) for the detection and imaging of buried objects in ground penetrating radar (GPR)-based applications, and one addressing the design and optimization of mm-wave automotive radars that comply with multiple challenging and contrasting requirements. A comprehensive set of numerical experiments is reported to demonstrate the efficacy and computational efficiency of the SbD-based optimization techniques in solving complex EM problems.
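As an illustration of the surrogate-assisted loop that the SbD paradigm builds on, the following is a minimal sketch combining a Gaussian-process surrogate (via scikit-learn) with a lower-confidence-bound exploration rule. The specific ML model, acquisition strategy and EM cost function used in the thesis are not specified here, so `expensive_eval` is only a placeholder for a full-wave solver call.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def sbd_like_loop(expensive_eval, bounds, n_init=10, n_iters=30, kappa=2.0):
    """Surrogate-assisted minimisation: a GP replaces most calls to the expensive solver."""
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    X = lo + np.random.rand(n_init, dim) * (hi - lo)       # initial training set
    y = np.array([expensive_eval(x) for x in X])
    gp = GaussianProcessRegressor(normalize_y=True)

    for _ in range(n_iters):
        gp.fit(X, y)
        # Cheap exploration of the surrogate with a lower-confidence-bound criterion
        cand = lo + np.random.rand(2000, dim) * (hi - lo)
        mu, std = gp.predict(cand, return_std=True)
        x_next = cand[np.argmin(mu - kappa * std)]
        y_next = expensive_eval(x_next)                     # one true solver call per iteration
        X = np.vstack([X, x_next])
        y = np.append(y, y_next)
    return X[np.argmin(y)], y.min()

# Hypothetical usage: `expensive_eval` would wrap a full-wave EM simulation of the radar
# antenna, or the forward model inside the inverse-scattering cost functional.
```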
|
26 |
Engineering the near field of radiating systems at millimeter waves: from theory to applications / Manipulation du champ proche des systèmes rayonnants en ondes millimétriques : théorie et applications
Iliopoulos, Ioannis, 20 December 2017 (links)
The general objective is to develop a new numerical tool dedicated to the 3-D focusing of energy in the very near-field zone of an antenna system. This tool defines the complex spatial distribution of the fields in the radiating aperture so as to focus the energy on an arbitrary volume in the reactive near-field zone. Hybridizing this tool with a code dedicated to the fast Method-of-Moments analysis of SIW antennas makes it possible to synthesize an ad-hoc SIW antenna. The selected antenna structures are planar, for example Radial Line Slot Array (RLSA) antennas, whose dimensions (positions, sizes and number of slots) are defined with the tools described above. The numerical results are validated first numerically, by full-wave electromagnetic analysis with commercial simulators, and then experimentally at millimeter waves (very near-field measurements). To reach these objectives, four main tasks are defined: development of a tool for synthesizing the field in the radiating aperture (a theoretical formulation coupled with a so-called alternating-projections method); development of a fast, FFT-based computation of the near field radiated by an aperture, together with back-propagation; hybridization of these algorithms with a Method-of-Moments code under development at IETR dedicated to the very fast analysis of SIW antennas; and the design of one or more proofs of concept, with numerical and experimental validation of the proposed concepts. / With the demand for near-field antennas continuously growing, the antenna engineer is charged with the development of new concepts and design procedures for this regime. From microwave up to terahertz frequencies, a vast number of applications, especially in the biomedical domain, need focused or shaped fields in the antenna proximity. This work proposes new theoretical methods for near-field shaping based on different optimization schemes. Continuous planar radiating apertures are optimized to radiate a near field with the required characteristics. In particular, a versatile optimization technique based on the alternating projection scheme is proposed; it is demonstrated that, with this scheme, 3-D control of focal spots generated by planar apertures is feasible. With the same setup, the vectorial problem (shaping the norm of the field) is also addressed. Convex optimization is additionally introduced for near-field shaping of continuous aperture sources, and its capabilities are demonstrated in different shaping scenarios. The discussion is then extended to shaping the field in lossy stratified media, based on a spectral Green's function approach. The biomedical applications of wireless power transfer to implants and breast cancer imaging are also addressed; for the latter, an extensive study is included, which delivers an outstanding improvement in penetration depth at higher frequencies. The thesis is completed by several prototypes used for validation: four different antennas have been designed, based either on the radial line slot array topology or on metasurfaces, then manufactured and measured, validating the overall approach of the thesis.
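The following is a minimal, scalar, single-plane sketch of the alternating-projections idea described above: the aperture field is propagated to the focal plane with an FFT-based plane-wave spectrum, the desired spot amplitude is imposed there, the field is back-propagated, and a phase-only aperture constraint is applied. The 60 GHz sampling, spot size and phase-only constraint are illustrative assumptions, not the thesis's actual setup.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, dz):
    """Scalar plane-wave-spectrum propagation of a sampled aperture field by a distance dz."""
    n = field.shape[0]
    k = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(kx, kx)
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))   # evanescent part -> imaginary kz
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def shape_near_field(target_mask, wavelength, dx, dz, n_iters=100):
    """Alternating projections between the aperture plane and the focal plane."""
    field = np.exp(2j * np.pi * np.random.rand(*target_mask.shape))  # random phase, unit amplitude
    for _ in range(n_iters):
        nf = angular_spectrum(field, wavelength, dx, dz)
        # Projection 1: impose the desired amplitude (spot) in the focal plane, keep the phase
        nf = target_mask * np.exp(1j * np.angle(nf))
        back = angular_spectrum(nf, wavelength, dx, -dz)
        # Projection 2: impose the aperture constraint (here: uniform amplitude, free phase)
        field = np.exp(1j * np.angle(back))
    return field

# Hypothetical example at 60 GHz: a 64x64 aperture sampled at lambda/4, spot 10 mm above it.
lam = 5e-3
mask = np.zeros((64, 64))
mask[28:36, 28:36] = 1.0                                   # desired focal spot amplitude
aperture_phase = np.angle(shape_near_field(mask, lam, dx=lam / 4, dz=10e-3))
```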
|
27 |
Design, Analysis, and Applications of Approximate Arithmetic Modules
Ullah, Salim, 06 April 2022 (links)
From the initial computing machines, the Colossus of 1943 and the ENIAC of 1945, to modern high-performance data centers and Internet of Things (IoT) devices, four design goals, i.e., high performance, energy efficiency, resource utilization, and ease of programmability, have remained a beacon of development for the computing industry. During this period, the computing industry has exploited the advantages of technology scaling and microarchitectural enhancements to achieve these goals. However, with the end of Dennard scaling, these techniques offer diminishing energy and performance advantages. Therefore, it is necessary to explore alternative techniques for satisfying the computational and energy requirements of modern applications. Towards this end, one promising technique is analyzing and surrendering the strict notion of correctness in various layers of the computation stack. Most modern applications across the computing spectrum---from data centers to IoT devices---interact with and analyze real-world data and take decisions accordingly. These applications are broadly classified as Recognition, Mining, and Synthesis (RMS) applications. Instead of producing a single golden answer, they produce several feasible answers and possess an inherent error resilience to the inexactness of the processed data and the corresponding operations. Utilizing this inherent error resilience, the paradigm of Approximate Computing relaxes the strict notion of computational correctness to realize high-performance and energy-efficient systems with acceptable output quality.
Prior works on circuit-level approximations have mainly focused on Application-Specific Integrated Circuits (ASICs). However, ASIC-based solutions suffer from long time-to-market and high-cost development cycles. These limitations can be overcome by utilizing the reconfigurable nature of Field Programmable Gate Arrays (FPGAs). However, due to the architectural differences between ASICs and FPGAs, applying ASIC-based approximation techniques to FPGA-based systems does not yield proportional performance and energy gains. Therefore, to exploit the principles of approximate computing in FPGA-based hardware accelerators for error-resilient applications, FPGA-optimized approximation techniques are required. Further, most state-of-the-art approximate arithmetic operators lack a generic approximation methodology for implementing new approximate designs as an application's accuracy and performance requirements change. These works also lack a methodology in which a machine learning model can be used to correlate an approximate operator with its impact on the output quality of an application. This thesis addresses these research challenges by designing and exploring FPGA-optimized logic-based approximate arithmetic operators. As multiplication is one of the most computationally complex and frequently used arithmetic operations in various modern applications, such as Artificial Neural Networks (ANNs), it is the focus of most of the approximation techniques proposed in this thesis.
The primary focus of the work is to provide a framework for generating FPGA-optimized approximate arithmetic operators and efficient techniques to explore approximate operators for implementing hardware accelerators for error-resilient applications.
Towards this end, we first present various designs of resource-optimized, high-performance, and energy-efficient accurate multipliers. Although modern FPGAs host high-performance DSP blocks to perform multiplication and other arithmetic operations, our analysis and results show that the orthogonal approach of having resource-efficient and high-performance multipliers is necessary for implementing high-performance accelerators. Due to the differences in the type of data processed by various applications, the thesis presents individual designs for unsigned, signed, and constant multipliers. Compared to the multiplier IPs provided by the FPGA Synthesis tool, our proposed designs provide significant performance gains. We then explore the designed accurate multipliers and provide a library of approximate unsigned/signed multipliers. The proposed approximations target the reduction in the total utilized resources, critical path delay, and energy consumption of the multipliers. We have explored various statistical error metrics to characterize the approximation-induced accuracy degradation of the approximate multipliers. We have also utilized the designed multipliers in various error-resilient applications to evaluate their impact on applications' output quality and performance.
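To illustrate how statistical error metrics of the kind mentioned above are evaluated, the following small sketch characterizes a toy truncation-based 8x8 approximate multiplier exhaustively. The truncation scheme is only a stand-in for the LUT-level FPGA designs of the thesis; the metric names follow common usage (error rate, mean/maximum error distance, mean relative error distance).

```python
import numpy as np

def approx_mult_trunc(a, b, cut=4):
    """Toy approximate 8x8 multiplier: zero the `cut` least-significant result bits.
    (Only a stand-in for LUT-level designs; it illustrates the error-metric flow.)"""
    return (a * b) & ~((1 << cut) - 1)

def error_metrics(approx, n_bits=8):
    a = np.arange(2 ** n_bits)
    A, B = np.meshgrid(a, a)                 # all operand pairs
    exact = A * B
    appr = approx(A, B)
    ed = np.abs(exact - appr).astype(float)  # error distance per operand pair
    nonzero = exact > 0
    return {
        "error_rate": float(np.mean(appr != exact)),
        "mean_error_distance": float(ed.mean()),
        "max_error_distance": float(ed.max()),
        "mean_rel_error_distance": float((ed[nonzero] / exact[nonzero]).mean()),
    }

print(error_metrics(approx_mult_trunc))
```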
Based on our analysis of the designed approximate multipliers, we identify the need for a framework to design application-specific approximate arithmetic operators. An application-specific approximate arithmetic operator intends to implement only the logic that can satisfy the application's overall output accuracy and performance constraints.
Towards this end, we present a generic design methodology for implementing FPGA-based application-specific approximate arithmetic operators from their accurate implementations according to the applications' accuracy and performance requirements. In this regard, we utilize various machine learning models to identify feasible approximate arithmetic configurations for various applications. We also utilize different machine learning models and optimization techniques to efficiently explore the large design space of individual operators and their utilization in various applications. In this thesis, we have used the proposed methodology to design approximate adders and multipliers.
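A minimal sketch of such ML-assisted design-space exploration is given below, assuming a hypothetical bit-vector encoding of operator configurations and a stubbed quality measurement. A random-forest surrogate trained on a small evaluated sample is then used to screen a much larger configuration space; this captures the general flavour of the methodology rather than its exact implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical setup: an approximate multiplier configuration is a bit-vector that switches
# individual simplifications on or off; `measure_quality` would run the target application
# (e.g., an ANN) with that operator and return its output quality. Here it is a stub.
n_knobs = 16
rng = np.random.default_rng(0)

def measure_quality(cfg):
    return 100.0 - 3.0 * cfg.sum() + rng.normal(0, 1)

# 1) Evaluate a small training sample of configurations (the expensive step).
train_cfgs = rng.integers(0, 2, size=(200, n_knobs))
train_quality = np.array([measure_quality(c) for c in train_cfgs])

# 2) Fit a surrogate that correlates a configuration with the application's output quality.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(train_cfgs, train_quality)

# 3) Screen a much larger design space cheaply and keep the predicted-feasible points.
candidates = rng.integers(0, 2, size=(20000, n_knobs))
predicted = model.predict(candidates)
feasible = candidates[predicted >= 90.0]        # e.g., at most a 10-point quality drop
print(f"{len(feasible)} of {len(candidates)} configurations predicted feasible")
```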
This thesis also explores other layers of the computation stack (cross-layer) for possible approximations to satisfy an application's accuracy and performance requirements. Towards this end, we first present a low bit-width and highly accurate quantization scheme for pre-trained Deep Neural Networks (DNNs). The proposed quantization scheme does not require re-training (fine-tuning the parameters) after quantization. We also present a resource-efficient FPGA-based multiplier that utilizes our proposed quantization scheme. Finally, we present a framework to allow the intelligent exploration and highly accurate identification of the feasible design points in the large design space enabled by cross-layer approximations. The proposed framework utilizes a novel Polynomial Regression (PR)-based method to model approximate arithmetic operators. The PR-based representation enables machine learning models to better correlate an approximate operator's coefficients with their impact on an application's output quality.

1. Introduction
1.1 Inherent Error Resilience of Applications
1.2 Approximate Computing Paradigm
1.2.1 Software Layer Approximation
1.2.2 Architecture Layer Approximation
1.2.3 Circuit Layer Approximation
1.3 Problem Statement
1.4 Focus of the Thesis
1.5 Key Contributions and Thesis Overview
2. Preliminaries
2.1 Xilinx FPGA Slice Structure
2.2 Multiplication Algorithms
2.2.1 Baugh-Wooley’s Multiplication Algorithm
2.2.2 Booth’s Multiplication Algorithm
2.2.3 Sign Extension for Booth’s Multiplier
2.3 Statistical Error Metrics
2.4 Design Space Exploration and Optimization Techniques
2.4.1 Genetic Algorithm
2.4.2 Bayesian Optimization
2.5 Artificial Neural Networks
3. Accurate Multipliers
3.1 Introduction
3.2 Related Work
3.3 Unsigned Multiplier Architecture
3.4 Motivation for Signed Multipliers
3.5 Baugh-Wooley’s Multiplier
3.6 Booth’s Algorithm-based Signed Multipliers
3.6.1 Booth-Mult Design
3.6.2 Booth-Opt Design
3.6.3 Booth-Par Design
3.7 Constant Multipliers
3.8 Results and Discussion
3.8.1 Experimental Setup and Tool Flow
3.8.2 Performance comparison of the proposed accurate unsigned multiplier
3.8.3 Performance comparison of the proposed accurate signed multiplier with the state-of-the-art accurate multipliers
3.8.4 Performance comparison of the proposed constant multiplier with the state-of-the-art accurate multipliers
3.9 Conclusion
4. Approximate Multipliers
4.1 Introduction
4.2 Related Work
4.3 Unsigned Approximate Multipliers
4.3.1 Approximate 4 × 4 Multiplier (Approx-1)
4.3.2 Approximate 4 × 4 Multiplier (Approx-2)
4.3.3 Approximate 4 × 4 Multiplier (Approx-3)
4.4 Designing Higher Order Approximate Unsigned Multipliers
4.4.1 Accurate Adders for Implementing 8 × 8 Approximate Multipliers from 4 × 4 Approximate Multipliers
4.4.2 Approximate Adders for Implementing Higher-order Approximate Multipliers
4.5 Approximate Signed Multipliers (Booth-Approx)
4.6 Results and Discussion
4.6.1 Experimental Setup and Tool Flow
4.6.2 Evaluation of the Proposed Approximate Unsigned Multipliers
4.6.3 Evaluation of the Proposed Approximate Signed Multiplier
4.7 Conclusion
5. Designing Application-specific Approximate Operators
5.1 Introduction
5.2 Related Work
5.3 Modeling Approximate Arithmetic Operators
5.3.1 Accurate Multiplier Design
5.3.2 Approximation Methodology
5.3.3 Approximate Adders
5.4 DSE for FPGA-based Approximate Operators Synthesis
5.4.1 DSE using Bayesian Optimization
5.4.2 MOEA-based Optimization
5.4.3 Machine Learning Models for DSE
5.5 Results and Discussion
5.5.1 Experimental Setup and Tool Flow
5.5.2 Accuracy-Performance Analysis of Approximate Adders
5.5.3 Accuracy-Performance Analysis of Approximate Multipliers
5.5.4 AppAxO MBO
5.5.5 ML Modeling
5.5.6 DSE using ML Models
5.5.7 Proposed Approximate Operators
5.6 Conclusion
6. Quantization of Pre-trained Deep Neural Networks
6.1 Introduction
6.2 Related Work
6.2.1 Commonly Used Quantization Techniques
6.3 Proposed Quantization Techniques
6.3.1 L2L: Log_2_Lead Quantization
6.3.2 ALigN: Adaptive Log_2_Lead Quantization
6.3.3 Quantitative Analysis of the Proposed Quantization Schemes
6.3.4 Proposed Quantization Technique-based Multiplier
6.4 Results and Discussion
6.4.1 Experimental Setup and Tool Flow
6.4.2 Image Classification
6.4.3 Semantic Segmentation
6.4.4 Hardware Implementation Results
6.5 Conclusion
7. A Framework for Cross-layer Approximations
7.1 Introduction
7.2 Related Work
7.3 Error-analysis of approximate arithmetic units
7.3.1 Application Independent Error-analysis of Approximate Multipliers
7.3.2 Application Specific Error Analysis
7.4 Accelerator Performance Estimation
7.5 DSE Methodology
7.6 Results and Discussion
7.6.1 Experimental Setup and Tool Flow
7.6.2 Behavioral Analysis
7.6.3 Accelerator Performance Estimation
7.6.4 DSE Performance
7.7 Conclusion
8. Conclusions and Future Work
|
28 |
TURBULENCE-INFORMED PREDICTIVE MODELING FOR RESILIENT SYSTEMS IN EMERGING GLOBAL CHALLENGES: APPLICATIONS IN RENEWABLE ENERGY MANAGEMENT AND INDOOR AIRBORNE TRANSMISSION CONTROL
Jhon Jairo Quinones Cortes (17592753), 09 December 2023 (links)
Evidence for climate change-related impacts and risks is already widespread globally, affecting not only ecosystems but also the economy and health of our communities. Data-driven predictive modeling approaches such as machine learning and deep learning have emerged as powerful tools for interpreting large and complex non-linear datasets, such as meteorological variables from weather stations or the distribution of infectious droplets produced in a cough. However, the strength of these data-driven models can be further improved by complementing them with foundational knowledge of the physical processes they represent. By understanding the core physics, one can enhance the reliability and accuracy of predictive outcomes. The effectiveness of these combined approaches becomes particularly feasible and robust with recent advancements in High-Performance Computing. With improved processing speed, algorithm design, and storage capabilities, modern computers allow for a deeper and more precise examination of the data. Such advancements equip us to address the diverse challenges presented by climate change more effectively.

In particular, this document advances research in mitigating and preventing the consequences of global warming by implementing data-driven predictive models based on statistical, machine learning, and deep learning methods, via two phases. In the first phase, this dissertation proposes frameworks consisting of machine and deep learning algorithms to increase the resilience of small-scale renewable energy systems, which are essential for reducing greenhouse gas emissions. The second phase focuses on using data from physics-based models, i.e., computational fluid dynamics (CFD), in data-driven predictive models to improve the design of air cleaning technologies, which are crucial for reducing the transmission of infectious diseases in indoor environments.

Specifically, this work is an article-based collection of published (or to-be-published) research articles. The articles are reformatted to fit the thesis's structure, and their contents are self-contained.
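As a loose illustration of feeding physics-based (CFD) data into a data-driven predictive model, the following sketch trains a gradient-boosting surrogate on a synthetic table of design parameters versus removal efficiency. The parameters, the response, and the data themselves are invented placeholders; the dissertation's actual datasets and model architectures are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Hypothetical setup: each CFD run maps air-cleaner design/operating parameters
# (flow rate, inlet position, filter placement, ...) to a simulated aerosol removal
# efficiency; the surrogate then predicts efficiency without re-running the CFD solver.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(300, 4))                    # normalised design parameters
y = 0.6 + 0.3 * X[:, 0] - 0.2 * X[:, 1] ** 2 + 0.05 * rng.normal(size=300)  # stand-in CFD output

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
surrogate = GradientBoostingRegressor().fit(X_tr, y_tr)
print("held-out R^2:", surrogate.score(X_te, y_te))         # quick check of predictive skill
```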
|