51

Hierarchical Logcut: A Fast And Efficient Way Of Energy Minimization Via Graph Cuts

Kulkarni, Gaurav
Graph cuts have emerged as an important combinatorial optimization tool for many problems in vision. Most computer vision problems are discrete labeling problems. For example, in stereopsis, labels represent disparity, and in image restoration, labels correspond to image intensities. Finding a good labeling involves optimization of an energy function. In computer vision, energy functions for discrete labeling problems can be elegantly formulated through Markov Random Field (MRF) based modeling, and graph cut algorithms have been found to efficiently optimize a wide class of such energy functions. The main contribution of this thesis lies in developing an efficient combinatorial optimization algorithm that can be applied to a wide class of energy functions. Generally, graph cut algorithms deal sequentially with each label in the labeling problem at hand, so their time complexity increases linearly with the number of labels. Our algorithm finds a solution/labeling in logarithmic time complexity without compromising the quality of the solution. In this work, we present an improved Logcut algorithm [24]. The Logcut algorithm [24] finds the individual bit values in the integer representation of labels. It has logarithmic time complexity but requires training over a data set. Our improved Logcut (Heuristic-Logcut, or H-Logcut) algorithm eliminates the need for training and obtains results comparable to the original Logcut algorithm. The original Logcut algorithm cannot be initialized with a known labeling. We present a new algorithm, Sequential Bit Plane Correction (SBPC), which overcomes this drawback: SBPC starts from a known labeling and individually corrects each bit of a label. This algorithm, too, has logarithmic time complexity. SBPC in combination with the H-Logcut algorithm further improves the rate of convergence and the quality of results. Finally, a hierarchical approach to graph cut optimization is used to further improve the rate of convergence. In a hierarchical approach, a solution at the coarser level is computed first and then used to initialize the algorithm at the finer level; here we present a novel way of initializing the finer level through the fusion move [25]. The SBPC and H-Logcut algorithms are extended to accommodate the hierarchical approach. This approach drastically improves the rate of convergence and attains a very low-energy labeling. The effectiveness of our approach is demonstrated on stereopsis, where the algorithm significantly outperforms all existing algorithms in terms of both quality of solution and rate of convergence.
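The bit-plane idea behind Logcut and SBPC can be made concrete with a short sketch. The outline below is only an illustration of the SBPC loop described in the abstract, not the thesis implementation: labels are corrected one bit plane at a time, so an L-label problem reduces to about log2(L) binary decisions per pixel. Each binary subproblem would normally be solved by a graph cut (max-flow) over an energy with pairwise terms; here a hypothetical pointwise stand-in, solve_binary_pointwise, keeps the sketch self-contained.

```python
import numpy as np

def solve_binary_pointwise(cost_keep, cost_flip):
    """Stand-in for a binary graph-cut solver: pick the cheaper option per
    pixel, ignoring pairwise smoothness terms (a real implementation would
    run max-flow on the binary energy)."""
    return cost_flip < cost_keep

def sbpc(labels, observed, num_bits):
    """Sequential Bit Plane Correction (schematic): starting from a known
    labeling, correct one bit plane at a time, MSB first, so only num_bits
    binary problems are solved regardless of the number of labels."""
    for b in reversed(range(num_bits)):
        flipped = labels ^ (1 << b)              # candidate with bit b flipped
        cost_keep = (labels - observed) ** 2     # toy unary term (restoration)
        cost_flip = (flipped - observed) ** 2
        take = solve_binary_pointwise(cost_keep, cost_flip)
        labels = np.where(take, flipped, labels)
    return labels

# Toy usage: correct an 8-bit labeling starting from an all-zero initialization.
rng = np.random.default_rng(1)
observed = rng.integers(0, 256, size=(4, 4))
init = np.zeros_like(observed)                   # known (poor) initial labeling
print(sbpc(init, observed, num_bits=8))          # ends near 'observed'
# (the unary-only stand-in is approximate; a graph cut per bit plane is exact
# for the binary subproblem)
```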
52

Algorithmic and Graph-Theoretic Approaches for Optimal Sensor Selection in Large-Scale Systems

Lintao Ye, 15 December 2020
Using sensor measurements to estimate the states and parameters of a system is a fundamental task in understanding the behavior of the system. Moreover, as modern systems grow rapidly in scale and complexity, it is not always possible to deploy sensors to measure all of the states and parameters of the system, due to cost and physical constraints. Therefore, selecting an optimal subset of all the candidate sensors to deploy and gather measurements of the system is an important and challenging problem. In addition, the systems may be targeted by external attackers who attempt to remove or destroy the deployed sensors. This further motivates the formulation of resilient sensor selection strategies. In this thesis, we address the sensor selection problem under different settings as follows.

First, we consider the optimal sensor selection problem for linear dynamical systems with stochastic inputs, where the Kalman filter is applied based on the sensor measurements to give an estimate of the system states. The goal is to select a subset of sensors under certain budget constraints such that the trace of the steady-state error covariance of the Kalman filter with the selected sensors is minimized. We characterize the complexity of this problem by showing that the Kalman filtering sensor selection problem is NP-hard and cannot be approximated within any constant factor in polynomial time for general systems. We then consider the optimal sensor attack problem for Kalman filtering: attack a subset of selected sensors under certain budget constraints in order to maximize the trace of the steady-state error covariance of the Kalman filter with the sensors that remain after the attack. We show that the same results as for the Kalman filtering sensor selection problem also hold for the sensor attack problem. Having shown that the general sensor selection and sensor attack problems for Kalman filtering are hard to solve, we next consider special classes of the general problems. Specifically, we consider the directed network underlying a linear dynamical system and investigate the case where a single node of the network is affected by a stochastic input. In this setting, we show that the corresponding sensor selection and sensor attack problems for Kalman filtering can be solved in polynomial time. We further study the resilient sensor selection problem for Kalman filtering: find a sensor selection strategy under sensor selection budget constraints such that the trace of the steady-state error covariance of the Kalman filter is minimized after an adversary removes some of the deployed sensors. We show that the resilient sensor selection problem for Kalman filtering is NP-hard, and provide a pseudo-polynomial-time algorithm to solve it optimally.

Next, we consider the sensor selection problem for binary hypothesis testing: select a subset of sensors under certain budget constraints such that a certain metric of the Neyman-Pearson (resp., Bayesian) detector corresponding to the selected sensors is optimized. We show that this problem is NP-hard if the objective is to minimize the miss probability (resp., error probability) of the Neyman-Pearson (resp., Bayesian) detector. We then consider three optimization objectives based on the Kullback-Leibler distance, the J-divergence, and the Bhattacharyya distance, respectively, and provide performance bounds on greedy algorithms when applied to the sensor selection problem under these objectives.

Moving beyond the binary hypothesis setting, we also consider the setting where the true state of the world comes from a set whose cardinality can be greater than two. A Bayesian approach is then used to learn the true state of the world based on the data streams provided by the data sources. We formulate the Bayesian learning data source selection problem under this setting, where the goal is to minimize the cost spent on the data sources such that the learning error is within a certain range. We show that the Bayesian learning data source selection problem is also NP-hard, and provide greedy algorithms with performance guarantees.

Finally, in light of the COVID-19 pandemic, we study the measurement selection problem for parameter estimation of epidemics spreading in networks. Here, the measurements (with certain costs) are collected by conducting virus and antibody tests on the individuals in the epidemic spread network. The goal is then to optimally estimate the parameters of the epidemic spread network (i.e., the infection rate and the recovery rate of the virus) while satisfying the budget constraint on collecting the measurements. Again, we show that the measurement selection problem is NP-hard, and provide approximation algorithms with performance guarantees.
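To make the Kalman filtering sensor selection objective concrete, the sketch below evaluates the trace of the steady-state (prior) error covariance for a candidate sensor subset via the filtering algebraic Riccati equation, and greedily adds sensors under a cardinality budget. This is a generic greedy heuristic for the stated objective, assuming per-sensor independent measurement noise; it is not the thesis's pseudo-polynomial algorithm, and all names are illustrative.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def cov_trace(A, C_all, W, V_diag, subset):
    """Trace of the steady-state (prior) error covariance of the Kalman
    filter using only the sensors (rows of C_all) in 'subset'."""
    if not subset:
        return np.inf
    idx = list(subset)
    C = C_all[idx, :]
    V = np.diag(V_diag[idx])
    try:
        # Filtering Riccati equation, via duality with the control DARE.
        P = solve_discrete_are(A.T, C.T, W, V)
    except Exception:
        return np.inf            # subset does not yield a solvable DARE
    return float(np.trace(P))

def greedy_sensor_selection(A, C_all, W, V_diag, budget):
    """Greedily add the sensor giving the largest trace reduction."""
    chosen, remaining = [], set(range(C_all.shape[0]))
    for _ in range(budget):
        best = min(remaining,
                   key=lambda i: cov_trace(A, C_all, W, V_diag, chosen + [i]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy usage: 3 candidate sensors, budget of 2.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
C_all = np.eye(3, 2)                  # each row is one candidate sensor
W = 0.1 * np.eye(2)                   # process noise covariance
V_diag = np.array([0.2, 0.05, 0.5])   # per-sensor measurement noise variances
print(greedy_sensor_selection(A, C_all, W, V_diag, budget=2))
```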
53

Optimization Algorithm Based on Novelty Search Applied to the Treatment of Uncertainty in Models

Martínez Rodríguez, David, 23 December 2021
Novelty Search is a recent paradigm in evolutionary and bio-inspired optimization algorithms, based on the idea of forcing the search toward those unexplored parts of the domain of the function that might be unattractive for the algorithm, with the aim of avoiding stagnation in local optima. Novelty Search has been applied to the Particle Swarm Optimization algorithm, obtaining a new algorithm named Novelty Swarm (NS). NS has been applied to the CEC2005 benchmark, comparing its results with other state-of-the-art algorithms. The results show better behaviour on highly nonlinear functions at the cost of increased computational complexity. In the rest of the thesis, the NS algorithm has been used in different models, specifically the design of an internal combustion engine, the prediction of energy demand with Grammatical Swarm, the evolution of the bladder cancer of a specific patient, and the evolution of COVID-19. It is also remarkable that, in the study of the COVID-19 models, the uncertainty of both the data and the evolution of the disease has been taken into account.
Martínez Rodríguez, D. (2021). Optimization Algorithm Based on Novelty Search Applied to the Treatment of Uncertainty in Models [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/178994
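The coupling of Particle Swarm Optimization with a novelty term, as described in the abstract above, can be sketched generically. This is an illustration of the idea, not the NS algorithm as specified in the thesis: novelty is scored as the mean distance to the k nearest previously visited positions, and the velocity update adds an attraction toward the most novel known point; the coefficients and archive policy are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                        # toy objective to minimize
    return float(np.sum(x ** 2))

def novelty(x, archive, k=5):
    """Mean distance to the k nearest archived positions."""
    d = np.sort(np.linalg.norm(archive - x, axis=1))
    return float(d[:k].mean())

dim, n_particles, iters = 2, 20, 100
w, c1, c2, c3 = 0.7, 1.5, 1.5, 0.5    # c3 weights the novelty attractor

x = rng.uniform(-5, 5, (n_particles, dim))
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.array([sphere(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()
archive = x.copy()                    # history of visited positions

for _ in range(iters):
    # the most novel archived point acts as an extra attractor
    nov = np.array([novelty(a, archive) for a in archive])
    novel_pt = archive[nov.argmax()]
    r1, r2, r3 = (rng.random((n_particles, dim)) for _ in range(3))
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x) + c3*r3*(novel_pt - x)
    x = x + v
    f = np.array([sphere(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()
    archive = np.vstack([archive, x])[-500:]   # bounded archive

print(gbest, sphere(gbest))           # near the origin for the sphere function
```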
54

Electrochemical model-based fault diagnosis of lithium-ion battery

Rahman, Md Ashiqur
Indiana University-Purdue University Indianapolis (IUPUI)

A gradient-free function optimization technique, namely the particle swarm optimization (PSO) algorithm, is utilized for parameter identification of the electrochemical model of a lithium-ion battery with LiCoO2 chemistry. Battery electrochemical model parameters are subject to change under severe or abusive operating conditions, resulting in, for example, a Navy over-discharged battery, a 24-hr over-discharged battery, or an over-charged battery. It is important for a battery management system to have these parameter changes fully captured in a bank of battery models that can be used to monitor battery conditions in real time. In this work, the PSO methodology has been used to identify four electrochemical model parameters that exhibit significant variations under severe operating conditions. The identified battery models were validated by comparing the model output voltage with the experimental output voltage for the stated operating conditions. These identified models were then used to monitor the condition of the battery, which can aid the battery management system (BMS) in improving overall performance. An adaptive estimation technique, namely the multiple model adaptive estimation (MMAE) method, was implemented for this purpose. In this estimation algorithm, all the identified models were simulated for a battery current input profile extracted from the hybrid pulse power characterization (HPPC) cycle simulation of a hybrid electric vehicle (HEV). A partial differential algebraic equation (PDAE) observer was utilized to obtain the estimated voltage, which was used to generate the residuals. Analysis of these residuals through MMAE provided the probability that the current battery operating condition matches that of one of the identified models. Simulation results show that the proposed model-based method offered accurate and effective fault diagnosis of the battery conditions. This type of fault diagnosis, based on models capturing the true physics of the battery electrochemistry, can lead to more accurate and robust battery fault diagnosis and help the BMS take appropriate steps to prevent battery operation in any of the stated severe or abusive conditions.
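The residual-weighting step of MMAE described above follows a standard Bayesian form: each model in the bank is weighted by the Gaussian likelihood of its measurement residual, and the weights are renormalized at every sample. A minimal sketch of that update follows, with illustrative inputs; in the thesis this is paired with electrochemical battery models and a PDAE observer.

```python
import numpy as np

def mmae_step(probs, residuals, residual_covs):
    """One MMAE update: scale each model's probability by the Gaussian
    likelihood of its voltage residual, then renormalize."""
    like = np.empty(len(probs))
    for j, (r, S) in enumerate(zip(residuals, residual_covs)):
        r, S = np.atleast_1d(r), np.atleast_2d(S)
        norm = 1.0 / np.sqrt((2 * np.pi) ** r.size * np.linalg.det(S))
        like[j] = norm * np.exp(-0.5 * r @ np.linalg.solve(S, r))
    post = probs * like
    return post / post.sum()

# Toy usage: 3 battery-condition models; model 1 explains the data best.
probs = np.full(3, 1 / 3)
for _ in range(20):                      # 20 measurement samples
    residuals = [0.8, 0.05, -0.5]        # volts; small residual = good model
    covs = [0.1, 0.1, 0.1]
    probs = mmae_step(probs, residuals, covs)
print(probs.argmax(), probs)             # model 1 dominates
```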
55

Development of ABAQUS-MATLAB Interface for Design Optimization using Hybrid Cellular Automata and Comparison with Bidirectional Evolutionary Structural Optimization

Alen Antony (11353053) 03 January 2022 (has links)
Topology optimization is an optimization technique used to synthesize models without any preconceived shape. These structures are synthesized with minimum-compliance problems in mind. With the rapid improvement in advanced manufacturing technology and the increased need for lightweight, high-strength designs, topology optimization is being used more than ever.

There exist a number of commercially available software packages that can be used for optimizing a product. These packages have a robust finite element solver and can produce good results. However, they offer little to no choice to the user when it comes to selecting the type of optimization method used.

It is possible to use a programming language like MATLAB to develop algorithms that use a specific type of optimization method, but the user is then also responsible for writing the FEA algorithms. This leads to a situation where flexibility over the optimization method is achieved but the robust FEA of the commercial tool is lost.

There have been works in the past that link ABAQUS with MATLAB, but they are primarily used as tools for finite element post-processing. The aim of this thesis is to develop an interface that can be used for solving optimization problems with different methods, such as hard-kill methods as well as the material penalization (SIMP) method. By doing so, it is possible to harness the potential of a commercial FEA package while giving the user the required flexibility to write or modify the codes to use an optimization method of his or her choice. This interface can also potentially be used to unlock the capabilities of other Dassault Systèmes products, as the firm is implementing tighter integration between all its products through the 3DExperience platform.

This thesis uses the interface to implement BESO- and HCA-based topology optimization. Since hybrid cellular automata is the only method other than the equivalent static load method that can be used for crashworthiness optimization, this work is well suited for that role when extended into the non-linear region.
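For context, one iteration of the hard-kill BESO scheme that such an interface drives can be sketched as follows. In the thesis the element sensitivities would come from the ABAQUS solution (e.g., element strain energies); here they are taken as a given array, the sensitivity filtering and history averaging that practical BESO needs for stability are omitted, and the names and evolution rate are illustrative.

```python
import numpy as np

def beso_update(density, sensitivity, target_frac, evol_rate=0.02):
    """One BESO iteration: step the solid volume fraction toward
    target_frac, keeping the most sensitive elements solid (hard-kill
    with re-admission of previously removed elements)."""
    n = density.size
    current = density.sum() / n
    nxt = max(target_frac, current * (1.0 - evol_rate))
    n_solid = max(1, int(round(nxt * n)))
    rank = np.argsort(sensitivity)[::-1]     # most sensitive first
    new_density = np.zeros_like(density)
    new_density[rank[:n_solid]] = 1.0        # solid; the rest become void
    return new_density

# Toy usage: 100 elements, placeholder sensitivities, drive volume to 50%.
rng = np.random.default_rng(0)
density = np.ones(100)
for _ in range(40):
    sens = rng.random(100) + density         # placeholder for FEA output
    density = beso_update(density, sens, target_frac=0.5)
print(density.mean())                        # ~0.5
```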
56

Analysis and CFD-Guided optimization of advanced combustion systems in compression-ignited engines

Spohr Fernandes, Cássio 12 May 2023 (has links)
Reducing emissions of pollutant gases from internal combustion engines (ICE) is one of the biggest challenges in combating global warming. As engines will continue to be used by industry for decades, it is necessary to develop new technologies. In this context, the present doctoral thesis is motivated by the need to further improve engines, both from a technical engineering and a social point of view, due to the effects of greenhouse gases. The main objective of this thesis is to develop an optimization methodology for compression ignition (CI) engine combustion systems by coupling optimization algorithms with computer simulation. By optimizing the combustion system, it is possible to increase engine efficiency, reducing fuel consumption together with gas emissions, in particular nitrogen oxides (NOx) and soot. In the first step, different optimization algorithms are assessed in order to select the best candidate for this methodology. From this point on, the first optimization focuses on a CI engine operating with conventional fuel, in order to validate the methodology and to evaluate the current state of evolution of these engines. The optimization process begins with the goal of reducing fuel consumption while keeping NOx and soot levels below the values of a real engine. The results obtained confirm that a new combustion system designed specifically for this engine could reduce fuel consumption while keeping gas emissions below the stipulated value. Furthermore, it is concluded that CI engines using conventional fuel are already at a very high efficiency level, and it is difficult to improve them without the use of an after-treatment system. Thus, the second optimization block is based on CI engines operating on an alternative fuel, in this case OME. This study aimed to design a specific combustion system for an engine using this fuel that delivers efficiency of the same order of magnitude as a diesel engine. While searching for better efficiency, the NOx emissions are a constraint of the optimization so that the combustion system does not emit more than a real engine. In this case, soot is not considered, because the characteristics of the fuel do not produce this kind of pollutant. The results showed that a combustion system designed specifically for this operation could deliver high efficiencies; the efficiency obtained was around 2.2% higher than that of the real diesel engine. In addition, it was possible to halve the NOx emissions when the engine operates with OME. The last optimization block concerns a new engine architecture that makes it possible to eliminate NOx emissions. The oxy-fuel combustion mode is exciting, since nitrogen is eliminated from the intake mixture and thus no N2-containing emissions are generated. Furthermore, with this combustion mode it is possible to capture CO2 from the exhaust gas, which can then be sold on the market. Since this is a new and little-researched topic, the results are promising. They show that it was possible to obtain a specific combustion system capable of delivering efficiency levels close to conventional engines. Furthermore, NOx emissions were eliminated, as were soot emissions. Additionally, this system was able to reduce CO and HC emissions to levels similar to conventional engines. Moreover, the results presented in this doctoral thesis provide an extended database for exploring CI engine operation, and the work showed the potential of computational simulation allied with mathematical methods for designing combustion systems for different applications.

The author thanks the Universitat Politècnica de València for the predoctoral contract (FPI-2019-S2-20-555), awarded within the framework of the Programa de Apoyo para la Investigación y Desarrollo (PAID).

Spohr Fernandes, C. (2023). Analysis and CFD-Guided optimization of advanced combustion systems in compression-ignited engines [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/193292
57

Development of a Metamaterial-Based Foundation System for the Seismic Protection of Fuel Storage Tanks

Wenzel, Moritz, 14 April 2020
Metamaterials are typically described as materials with 'unusual' wave propagation properties. Originally developed for electromagnetic waves, these materials have also spread into the field of acoustic wave guiding and cloaking, with the most relevant of these 'unusual' properties being the so-called band-gap phenomenon. A band gap signifies a frequency region where elastic waves cannot propagate through the material, which, in principle, could be used to protect buildings from earthquakes. Based on this, two relevant concepts have been proposed in the field of seismic engineering, namely metabarriers and metamaterial-based foundations. This thesis deals with the development of the Metafoundation, a metamaterial-based foundation system for the seismic protection of fuel storage tanks against excessive base shear and pipeline rupture. Note that storage tanks have proven to be highly sensitive to earthquakes and can trigger severe economic and environmental consequences in case of failure, and were therefore chosen as the superstructure for this study. Furthermore, when tanks are protected with traditional base isolation systems, the resulting horizontal displacements during seismic action may become excessively large and subsequently damage connected pipelines. A novel system that protects both tank and pipeline could significantly augment the overall safety of industrial plants.

With the tank as the primary structure of interest in mind, the Metafoundation was conceived as a locally resonant metamaterial with a band gap encompassing the tank's critical eigenfrequency. The initial design comprised a continuous concrete matrix with embedded resonators and rubber inclusions, which was later reinvented as a column-based structure with steel springs for resonator suspension. After investigating the band-gap phenomenon, a parametric study of the system specifications showed that the horizontal stiffness of the overall foundation is crucial to its functionality, while the superstructure turned out to be non-negligible when tuning the resonators. Furthermore, storage tanks are commonly connected to pipeline systems, which can be damaged by the interaction between tank and pipeline during seismic events. Due to the complex and nonlinear response of pipeline systems, the coupled tank-pipeline behaviour becomes increasingly difficult to represent with numerical models, which led to an experimental study of a foundation-tank-pipeline setup. With the aid of a hybrid simulation, only the pipeline needed to be represented by a physical substructure, while both tank and Metafoundation were modelled as numerical substructures and coupled to the pipeline. The results showed that the foundation can effectively reduce the stresses in the tank and, at the same time, limit the displacements imposed on the pipeline.

Building on this, an optimization algorithm was developed in the frequency domain, taking the superstructure and the ground motion spectrum into consideration. The advantages of optimizing in the frequency domain were, on the one hand, the reduction of computational effort and, on the other hand, the consideration of the stochastic nature of the earthquake. Two different performance indices, investigating interstory drifts and energy dissipation, revealed that neither superstructure nor ground motion can be disregarded when designing a metamaterial-based foundation. Moreover, a 4 m tall optimized foundation, designed to remain elastic when verified with a response spectrum analysis at a return period of 2475 years (according to NTC 2018), reduced the tank's base shear on average by 30%. These results indicated that the foundation was feasible and functional in terms of construction practices and dynamic response, yet impractical from an economic point of view. To reduce the uneconomic system size, a negative stiffness mechanism was invented and implemented into the foundation as a periodic structure. This mechanism, based on a local instability, amplified the metamaterial-like properties and thereby enhanced the overall system performance. Note that, due to the considered instability, the device exhibited a nonlinear force-displacement relationship, which had the interesting effect of reducing the band gap instead of increasing it. Furthermore, time history analyses demonstrated that with 50% of the maximum admissible negative stiffness, the foundation could be reduced to one third of its original size while maintaining its performance. Last but not least, a study on wire ropes as resonator suspension was conducted. Their nonlinear behaviour was approximated with the Bouc-Wen model, subsequently linearized by means of stochastic techniques, and finally optimized with the algorithm developed earlier. The conclusion was that wire ropes could be used as a more realistic suspension mechanism while maintaining the high damping values required by the optimized foundation layouts.

In sum, a metamaterial-based foundation system is developed and studied herein, with the main findings being: (i) a structure of this type is feasible under common construction practices; (ii) the shear stiffness of the system has a fundamental impact on its functionality; (iii) the superstructure cannot be neglected when studying metamaterial-based foundations; (iv) the complete coupled system can be tuned with an optimization algorithm based on calculations in the frequency domain; (v) an experimental study suggests that the system could be advantageous to connected pipelines; (vi) wire ropes may serve as resonator suspension; and (vii) a novel negative stiffness mechanism can effectively improve the system performance.
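The band-gap mechanism that the Metafoundation exploits can be illustrated with a minimal 1D locally resonant chain: a line of masses coupled by springs, each carrying a spring-suspended resonator. Sweeping the excitation frequency and computing the transmissibility across the chain reveals a band of strong attenuation near the resonator tuning frequency. All values below are illustrative and are not the thesis design.

```python
import numpy as np

N = 10                      # outer masses
m, mr = 1000.0, 300.0       # outer and resonator masses [kg]
k, kr = 4.0e7, 1.2e6        # chain and resonator springs [N/m]

# DOFs: 0..N-1 outer masses, N..2N-1 resonators
M = np.diag([m] * N + [mr] * N)
K = np.zeros((2 * N, 2 * N))
K[0, 0] += k                                  # first mass anchored to ground
for i in range(N - 1):                        # chain springs
    K[np.ix_([i, i + 1], [i, i + 1])] += k * np.array([[1, -1], [-1, 1]])
for i in range(N):                            # resonator suspensions
    j = N + i
    K[np.ix_([i, j], [i, j])] += kr * np.array([[1, -1], [-1, 1]])

F = np.zeros(2 * N)
F[0] = 1.0                                    # harmonic force on the first mass
f_res = np.sqrt(kr / mr) / (2 * np.pi)        # resonator tuning frequency
print(f"resonator frequency: {f_res:.2f} Hz")

for f_hz in np.linspace(5, 25, 9):
    w = 2 * np.pi * f_hz
    # small structural damping keeps the matrix invertible at resonance
    X = np.linalg.solve(K * (1 + 0.01j) - w**2 * M, F)
    T = abs(X[N - 1]) / abs(X[0])             # transmissibility across the chain
    print(f"{f_hz:5.1f} Hz  |X_end/X_1| = {T:.3e}")   # dips near f_res
```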
58

OPTIMIZATION OF ENERGY MANAGEMENT STRATEGIES FOR FUEL-CELL HYBRID ELECTRIC AIRCRAFT

Ayomide Samuel Oke, 23 April 2024
Electric aircraft offer a promising avenue for reducing aviation's environmental impact through decreased greenhouse gas emissions and noise pollution. Nonetheless, their adoption is hindered by the challenge of limited operational range. This study addresses the range limitation by integrating and optimizing multiple energy storage components (hydrogen fuel cells, Li-ion batteries, and ultracapacitors) through advanced energy management strategies. Utilizing meta-heuristic optimization methods, the research assessed the dynamic performance of each energy component and the effectiveness of the energy management strategy, primarily measured by the hydrogen consumption rate. MATLAB simulations validated the proposed approach, indicating a decrease in hydrogen usage and thus enhanced efficiency and potential cost savings. Artificial Gorilla Troop Optimization yielded the best results with the lowest average hydrogen consumption (102.62 grams), outperforming Particle Swarm Optimization (104.68 grams) and Ant Colony Optimization (105.96 grams). The findings suggest that employing a combined energy storage and optimization strategy can significantly improve the operational efficiency and energy conservation of electric aircraft. The study highlights the potential of such strategies to extend the range of electric aircraft, contributing to a more sustainable aviation future.
59

Realization of a Scheduling Environment for Mixed-Parallel Applications and Optimization of Layer-Based Scheduling Algorithms

Kunis, Raphael, 20 January 2011
One challenge of parallel processing is achieving scalability of large parallel applications across different parallel systems. The central problem is that while an application may execute very well on one parallel system, porting it to another system generally leads to poor results. By using the programming model of parallel tasks with dependencies, scalability can be improved significantly for many parallel algorithms. Programming with parallel tasks leads to task graphs with dependencies that represent a parallel application, also referred to as a mixed-parallel application. The basis for the efficient execution of a mixed-parallel application is a suitable schedule, which specifies an efficient mapping of the parallel tasks onto the processors of the parallel system. Scheduling algorithms are employed to compute such a schedule. A central problem in determining a schedule for mixed-parallel applications is that scheduling is already NP-hard for single-processor tasks with dependencies on a parallel system with two processors. Therefore, only approximation algorithms and heuristics exist for computing a schedule. One approach is layer-based scheduling algorithms. These scheduling algorithms first form layers of independent parallel tasks and then compute the schedule for each layer separately. A weakness of these algorithms is the concatenation of the individual layer schedules into the global schedule. The Move-blocks algorithm presented here offers an elegant way to improve this concatenation, by merging the schedules of consecutive layers. Although a multitude of scheduling algorithms for mixed-parallel applications exists, there has so far been no comprehensive tool support for scheduling. In particular, no scheduling environment exists that unites a variety of scheduling algorithms. The presentation of the flexible, component-based, and extensible scheduling environment SEParAT is the second focus of this dissertation. SEParAT supports various usage scenarios that go far beyond pure scheduling, e.g., comparing scheduling algorithms and extending and implementing new scheduling algorithms. In addition to the usage scenarios, both the internal processing of a scheduling pass and the component-based software architecture are presented in detail. (Translated from the German original.)
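The layering step that these algorithms share can be sketched compactly: the tasks of a DAG are grouped by the length of their longest dependency chain, so all tasks inside one layer are mutually independent and can be scheduled together. The per-layer scheduling and the Move-blocks merging are the dissertation's contributions and are not reproduced here; the sketch below only illustrates the layer construction.

```python
from collections import deque

def build_layers(num_tasks, edges):
    """Group the tasks of a DAG into layers: layer d contains every task
    whose longest chain of predecessors has length d, so tasks within a
    layer are mutually independent."""
    succ = [[] for _ in range(num_tasks)]
    indeg = [0] * num_tasks
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    depth = [0] * num_tasks
    queue = deque(t for t in range(num_tasks) if indeg[t] == 0)
    while queue:
        u = queue.popleft()
        for v in succ[u]:
            depth[v] = max(depth[v], depth[u] + 1)
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    layers = {}
    for task, d in enumerate(depth):
        layers.setdefault(d, []).append(task)
    return [layers[d] for d in sorted(layers)]

# Toy task graph: tasks 0 and 1 feed 2; task 2 feeds 3 and 4.
print(build_layers(5, [(0, 2), (1, 2), (2, 3), (2, 4)]))
# -> [[0, 1], [2], [3, 4]]  (each layer is then scheduled separately)
```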
60

DISTRIBUTED MACHINE LEARNING OVER LARGE-SCALE NETWORKS

Frank Lin, 18 July 2023
The swift emergence and wide-ranging utilization of machine learning (ML) across various industries, including healthcare, transportation, and robotics, have underscored the escalating need for efficient, scalable, and privacy-preserving solutions. Recognizing this, we present an integrated examination of three novel frameworks, each addressing different aspects of distributed learning and privacy issues: Two Timescale Hybrid Federated Learning (TT-HF), Delay-Aware Federated Learning (DFL), and Differential Privacy Hierarchical Federated Learning (DP-HFL). TT-HF introduces a semi-decentralized architecture that combines device-to-server and device-to-device (D2D) communications. Devices execute multiple stochastic gradient descent iterations on their datasets and sporadically synchronize model parameters via D2D communications. A unique adaptive control algorithm optimizes step size, D2D communication rounds, and global aggregation period to minimize network resource utilization and achieve a sublinear convergence rate. TT-HF outperforms conventional FL approaches in terms of model accuracy, energy consumption, and resilience against outages. DFL focuses on enhancing distributed ML training efficiency by accounting for communication delays between edge and cloud. It also uses multiple stochastic gradient descent iterations and periodically consolidates model parameters via edge servers. The adaptive control algorithm for DFL mitigates energy consumption and edge-to-cloud latency, resulting in faster global model convergence, reduced resource consumption, and robustness against delays. Lastly, DP-HFL is introduced to combat privacy vulnerabilities in FL. Merging the benefits of FL and Hierarchical Differential Privacy (HDP), DP-HFL significantly reduces the need for differential privacy noise while maintaining model performance, exhibiting an optimal privacy-performance trade-off. Theoretical analysis under both convex and nonconvex loss functions confirms DP-HFL's effectiveness regarding convergence speed, privacy-performance trade-off, and potential performance enhancement with appropriate network configuration. In sum, the study thoroughly explores TT-HF, DFL, and DP-HFL, and their unique solutions to distributed learning challenges such as efficiency, latency, and privacy concerns. These advanced FL frameworks have considerable potential to further enable effective, efficient, and secure distributed learning.
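The local-SGD-plus-periodic-aggregation pattern that TT-HF, DFL, and DP-HFL all build on can be shown in a few lines. The sketch below is the plain federated averaging baseline, not the thesis's frameworks: it omits D2D consensus, delay awareness, and differential privacy noise, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, steps, lr):
    """A device runs several SGD steps on its own data (squared loss)."""
    for _ in range(steps):
        i = rng.integers(len(y))
        w = w - lr * (X[i] @ w - y[i]) * X[i]
    return w

# Toy setup: 5 devices share the same underlying linear model.
w_true = np.array([2.0, -1.0])
devices = []
for _ in range(5):
    X = rng.normal(size=(100, 2))
    y = X @ w_true + 0.1 * rng.normal(size=100)
    devices.append((X, y))

w_global = np.zeros(2)
for _ in range(20):                         # global aggregation periods
    local = [local_sgd(w_global.copy(), X, y, steps=10, lr=0.05)
             for X, y in devices]
    w_global = np.mean(local, axis=0)       # server averages the local models

print(w_global)                             # close to w_true
```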
