391 | Development and Implementation of Rotorcraft Preliminary Design Methodology using Multidisciplinary Design Optimization. Khalid, Adeel S., 14 November 2006.
A formal framework for preliminary rotorcraft design using the Integrated Product and Process Development (IPPD) methodology is developed and implemented in this research. All the technical aspects of design are considered, including vehicle engineering, dynamic analysis, stability and control, aerodynamic performance, propulsion, transmission design, weight and balance, noise analysis, and economic analysis. The design loop starts with a detailed analysis of requirements. A baseline is selected and upgrade targets are identified based on the mission requirements. An Overall Evaluation Criterion (OEC) is developed to measure the quality of the design and to compare it with competitors. The requirements analysis and baseline upgrade targets lead to the initial sizing and performance estimation of the new design. The digital information is then passed to disciplinary experts, where the detailed disciplinary analyses are performed. Information is transferred from one discipline to another as the design loop is iterated. To coordinate all the disciplines in the product development cycle, Multidisciplinary Design Optimization (MDO) techniques such as All At Once (AAO) and Collaborative Optimization (CO) are suggested. The methodology is implemented on a Light Turbine Training Helicopter (LTTH) design. Detailed disciplinary analyses are integrated through a common platform for efficient, centralized transfer of design information from one discipline to another in a collaborative manner. Several disciplinary and system-level optimization problems are solved. After all the constraints of the multidisciplinary problem have been satisfied and an optimal design has been obtained, it is compared with the initial baseline, using the OEC developed earlier, to measure the level of improvement achieved. Finally, a digital preliminary design is proposed.
The proposed design methodology provides an automated design framework, facilitates parallel design by removing disciplinary interdependencies, and makes current, updated information available to all disciplines at all times through a central collaborative repository; as a result, overall design time is reduced and an optimized design is achieved.
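The Overall Evaluation Criterion described above can be sketched as a weighted combination of criterion ratios between the candidate design and the baseline. The weights, criteria, and numbers below are illustrative assumptions, not values from the thesis, and the actual OEC may use a different functional form.

```python
# Hypothetical Overall Evaluation Criterion (OEC) as a weighted sum of
# criterion ratios (candidate design vs. baseline). Weights and criteria
# are illustrative assumptions, not taken from the thesis.

def oec(candidate, baseline, weights):
    """Weighted sum of candidate/baseline ratios; > 1 means a net improvement.
    For criteria where lower is better (e.g. cost), store an inverted ratio."""
    total = 0.0
    for key, w in weights.items():
        total += w * (candidate[key] / baseline[key])
    return total

baseline  = {"payload_kg": 450.0, "range_km": 600.0, "cost_ratio": 1.0}
candidate = {"payload_kg": 495.0, "range_km": 630.0,
             "cost_ratio": 1.0 / 1.05}   # 5% costlier, so ratio < 1
weights   = {"payload_kg": 0.4, "range_km": 0.4, "cost_ratio": 0.2}

score = oec(candidate, baseline, weights)  # > 1.0: improvement over baseline
```

A score above 1.0 plays the role described in the abstract: a single scalar for comparing the optimized design against the baseline or against competitors.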
392 | Numerical Modeling and Characterization of Vertically Aligned Carbon Nanotube Arrays. Joseph, Johnson, 1 January 2013.
Since their discovery, carbon nanotubes have been widely studied, but mostly in the form of individual one-dimensional carbon nanotubes (CNTs). From a practical application point of view, it is highly desirable to produce carbon nanotubes at large scale. This has resulted in a new class of carbon nanotube material, called vertically aligned carbon nanotube arrays (VA-CNTs). To date, our ability to design and model this complex material is still limited. The classical molecular mechanics methods used to model individual CNTs are not applicable to the modeling of VA-CNT structures because of the significant computational effort required. This research develops efficient structural mechanics approaches to design, model, and characterize the mechanical responses of VA-CNTs. Structural beam and shell mechanics are generally applicable to the well-aligned VA-CNTs prepared by template synthesis, while structural solid elements are more applicable to the much more complex, super-long VA-CNTs obtained from template-free synthesis. VA-CNTs are also highly “tunable” from a structural standpoint. The architectures and geometric parameters of the VA-CNTs have been thoroughly examined, including tube configuration, tube diameter, tube height, nanotube array density, and tube distribution pattern, among many other factors. Overall, the structural mechanics approaches are simple and robust methods for the design and characterization of these novel carbon nanomaterials.
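As a minimal instance of the beam-mechanics approach mentioned above, a single well-aligned nanotube can be idealized as a pinned-pinned Euler-Bernoulli column and its critical buckling load evaluated in closed form. The modulus, diameter, wall thickness, and height below are generic literature-style values chosen for illustration, not parameters from the thesis.

```python
import math

# Single CNT idealized as a pinned-pinned Euler-Bernoulli column.
# All numbers (modulus, diameter, wall thickness, height) are assumed,
# illustrative values, not data from the thesis.

E = 1.0e12          # Young's modulus of the CNT wall, ~1 TPa (assumed)
d_out = 10e-9       # outer diameter, m
t_wall = 0.34e-9    # wall thickness ~ graphite interlayer spacing, m
L = 1e-6            # tube height, m

d_in = d_out - 2.0 * t_wall
I = math.pi / 64.0 * (d_out**4 - d_in**4)   # hollow-tube second moment of area, m^4
P_cr = math.pi**2 * E * I / L**2            # Euler critical buckling load, N
```

For these values the critical load lands in the nanonewton range, which is why slender, high-aspect-ratio tubes in an array buckle long before the wall material itself fails.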
393 | Statistical methods for the lifespan modeling of electrical engineering components. Salameh, Farah, 7 November 2016.
Reliability has become an important issue, since the most critical industries, such as aeronautics, space, and nuclear, are moving towards the design of more electrical systems. The objective is to understand, model, and predict the aging mechanisms that could lead to component and system failure. The study of the effects of operational constraints on the degradation of components is essential for the prediction of their lifetime. Numerous lifespan models have been developed in the electrical engineering literature. However, these models have some limitations: they depend on the studied material and its physical properties, they are often restricted to one or two stress factors, and they do not integrate interactions that may exist between these factors. This thesis presents a new methodology for the lifespan modeling of electrical engineering components. The methodology is general; it is applicable to various components without prior information on their physical properties. The developed models are statistical models estimated on experimental data obtained from accelerated aging tests in which several types of stress factors are considered. The models aim to study the effects of the different stress factors and their interactions. The number and configuration of the aging tests needed to construct the models (learning sets) are optimized so as to minimize the experimental cost while maximizing the accuracy of the models. Additional, randomly configured experiments are carried out to validate the models (test sets). Two categories of components are tested: two types of insulation materials commonly used in electrical machines, and OLED light sources. Three forms of lifespan model are presented: parametric, non-parametric, and hybrid models. All models are evaluated using statistical tools that both assess their relevance and measure their predictive quality on the test-set points.
Parametric models quantify the effects of stress factors and their interactions on the lifespan through a predefined analytical expression; a statistical test is then used to assess the significance of each parameter in the model. These models show good prediction quality on their test sets. The relationship between the lifespan and the stress factors is also modeled by regression trees as an alternative to parametric models. Regression trees are non-parametric models that graphically classify experimental points into zones in which the stress factors are ranked according to their effects on the lifespan. A simple, graphic, and direct relationship between the lifespan and the stress factors is thus obtained. However, unlike parametric models, which are continuous over the studied experimental domain, regression trees are piecewise constant, which degrades their predictive quality relative to parametric models. To overcome this disadvantage, a third approach assigns a linear model to each of the zones identified by the regression tree. The resulting model, called a hybrid model, is piecewise linear: it refines the parametric models by evaluating the effects of the factors within each zone while improving the prediction quality of the regression trees.
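The contrast drawn above between a piecewise-constant regression tree and a piecewise-linear hybrid model can be illustrated in one dimension. The data, the split point, and the two zones below are synthetic assumptions; the thesis works with multi-factor experimental aging data.

```python
# One-dimensional sketch of the tree-vs-hybrid idea: a regression-tree
# leaf predicts a constant per zone, while the hybrid model fits a line
# per zone. The data and the split at x = 5 are synthetic.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def sse(pred, ys):
    return sum((p - y) ** 2 for p, y in zip(pred, ys))

# Synthetic lifespan-vs-stress data with a regime change at x = 5
xs = [float(i) for i in range(10)]
ys = [100.0 - 2.0 * x if x < 5 else 120.0 - 8.0 * x for x in xs]

for lo, hi in [(0, 5), (5, 10)]:                 # zones found by the tree
    zx = [x for x in xs if lo <= x < hi]
    zy = [ys[xs.index(x)] for x in zx]
    leaf = sum(zy) / len(zy)                     # piecewise-constant leaf
    a, b = fit_line(zx, zy)                      # piecewise-linear hybrid
    tree_err = sse([leaf] * len(zy), zy)
    hybrid_err = sse([a + b * x for x in zx], zy)
    # the linear fit is exact on this data; the constant leaf is not
```

Because each zone is exactly linear here, the hybrid model's error vanishes while the leaf constant leaves a large residual, which is the abstract's argument for refining tree leaves with local linear models.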
394 | Contributions to the study of small electronically-commutated axial-flux permanent-magnet machines. Pop, Adrian Augustin, 15 December 2012.
The research work presented in this thesis is concerned with small electronically-commutated axial-flux permanent-magnet (AFPM) machines having the double-sided topology of an inner rotor with surface-mounted Nd-Fe-B magnets, sandwiched between two outer slotted stators with distributed three-phase windings. After reviewing the small double-sided AFPM machine topologies that are candidates for low-speed direct-drive applications, the thesis hinges on the size equations and the analytical electromagnetic design of the inner-rotor AFPM (AFIPM) machine topology under study. Original methods for the modelling and design optimization of a small prototype AFIPM machine are then proposed, with a view to reducing the airgap flux-density space harmonics and the torque ripple through rotor-PM shape modification.
Extensive experimental tests are carried out on the small three-phase AFIPM machine prototype in order to validate its design and to check its electronic commutation and basic control technique.
395 | Adaptive surrogate models for reliability analysis and reliability-based design optimization. Dubourg, Vincent, 5 December 2011.
This thesis is a contribution to the solution of the reliability-based design optimization problem. This probabilistic design approach aims to account for the uncertainty attached to the system of interest in order to provide optimal and safe solutions. The safety level is quantified in the form of a probability of failure, and the optimization problem then consists in ensuring that this failure probability remains below a threshold specified by the stakeholders. Solving this problem requires a large number of calls to the limit-state function underlying the reliability analysis, so it becomes cumbersome when the limit-state function involves an expensive-to-evaluate numerical model (e.g. a finite element model).
In this context, this manuscript proposes a surrogate-based strategy in which the limit-state function is progressively replaced by a Kriging meta-model. Particular attention is paid to quantifying, reducing, and eventually eliminating the error introduced by the use of this meta-model in place of the original model. The proposed methodology is applied to the design of geometrically imperfect shells prone to buckling.
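The quantity at the heart of the methodology above is the failure probability P_f = P(g(X) <= 0). A crude Monte Carlo estimator of it can be sketched as below, where a cheap analytic limit-state function stands in for the expensive model (or its Kriging surrogate) so that the estimator itself is visible; the distributions are invented for illustration.

```python
import math
import random

# Crude Monte Carlo estimate of a failure probability P_f = P(g(X) <= 0).
# The thesis substitutes an adaptive Kriging meta-model for an expensive
# limit-state function g; here g is a cheap analytic stand-in (assumed):
# resistance R ~ N(8, 1), load S ~ N(5, 1), failure when S exceeds R.

random.seed(0)

def g(r, s):
    return r - s          # limit-state function: failure when g <= 0

n, failures = 100_000, 0
for _ in range(n):
    r = random.gauss(8.0, 1.0)
    s = random.gauss(5.0, 1.0)
    if g(r, s) <= 0.0:
        failures += 1

p_f = failures / n        # exact value here is Phi(-3/sqrt(2)) ~ 1.7e-2
```

Each sample is one call to g; when g hides a finite element solve, the 100,000 calls above are exactly the cost that motivates replacing g with a surrogate.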
396 | Reliability-based structural optimization. Slowik, Ondřej, January 2014.
This thesis introduces the reader to the importance of optimization and probabilistic assessment of structures in civil engineering problems. Chapter 2 investigates the combination of previously proposed optimization techniques with probabilistic assessment in the form of optimization constraints. Academic software has been developed to demonstrate the effectiveness of the suggested methods and to test them statistically. Chapter 3 summarizes the results of testing the previously described optimization method (called Aimed Multilevel Sampling), including a comparison with other optimization techniques. In the final part of the thesis, the described procedures are demonstrated on selected optimization and reliability problems. The methods described in the text represent an engineering approach to optimization problems and aim to provide a simple, transparent optimization algorithm that could serve practical engineering purposes.
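In the spirit of the probabilistic constraints described above (but not the Aimed Multilevel Sampling algorithm itself), a reliability-constrained optimization can be sketched as: pick the cheapest design whose Monte Carlo-estimated failure probability meets a target. All problem data below are synthetic assumptions.

```python
import random

# Sketch of optimization under a probabilistic constraint: choose the
# cheapest design parameter d whose Monte Carlo failure-probability
# estimate stays below a target. The load distribution, capacity model,
# and candidate grid are all synthetic, for illustration only.

random.seed(1)
P_TARGET = 1e-2

def p_fail(d, n=20_000):
    """P(load > capacity(d)) with load ~ N(5, 1); capacity = 4 + d."""
    fails = sum(1 for _ in range(n) if random.gauss(5.0, 1.0) > 4.0 + d)
    return fails / n

best = None
for d in [x / 2.0 for x in range(1, 11)]:   # candidate designs 0.5 .. 5.0
    if p_fail(d) <= P_TARGET:
        best = d                             # cost grows with d, so the
        break                                # first feasible d is optimal
```

The search stops at the first feasible candidate because cost is monotone in d; real structural problems need a genuine optimizer, which is where methods such as the one tested in Chapter 3 come in.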
397 | Reliability-Based Assessment and Optimization of High-Speed Railway Bridges. Allahvirdizadeh, Reza, 2021.
Increasing the operational speed of trains has attracted considerable interest in recent decades and has brought new challenges, especially in terms of infrastructure design methodology, as higher speeds may induce excessive vibrations. Such vibrations can damage bridges, which in turn increases maintenance costs, endangers the safety of passing trains, and disrupts passenger comfort. Conventional design provisions should therefore be evaluated in the light of these modern concerns; indeed, several previous studies have highlighted some of their shortcomings. It should be emphasized that most of these studies have neglected the uncertainties involved, which prevents the reported results from representing a complete picture of the problem. In this respect, the present thesis is dedicated to evaluating the performance of conventional design methods, especially those related to running safety and passenger comfort, using probabilistic approaches. To achieve this objective, a preliminary study was carried out using the first-order reliability method for short/medium-span bridges crossed by trains at a wide range of operating speeds. Comparison of these results with the corresponding deterministic responses showed that applying a constant safety factor to the running-safety threshold does not guarantee an identical safety index for all bridges. It also showed that the conventional design approaches result in failure probabilities higher than the target values. This conclusion highlights the need to update the design methodology for running safety. However, it was essential to determine whether running safety is the predominant design criterion before conducting further analysis. Therefore, a stochastic comparison between this criterion and passenger comfort was performed. Due to the significant computational cost of such investigations, subset simulation and crude Monte Carlo (MC) simulation using meta-models based on polynomial chaos expansion were employed.
Both methods were found to perform well, with running safety almost always dominating the passenger comfort limit state. Subsequently, classification-based meta-models, e.g. support vector machines, k-nearest neighbours and decision trees, were combined using ensemble techniques to investigate the influence of soil-structure interaction on the evaluated reliability of running safety. The obtained results showed a significant influence, highlighting the need for detailed investigations in further studies. Finally, a reliability-based design optimization was conducted to update the conventional design method of running safety by proposing minimum requirements for the mass per length and moment of inertia of bridges. It is worth mentioning that the inner loop of the method was solved by a crude MC simulation using adaptively trained Kriging meta-models.
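The first-order reliability method used in the preliminary study above has a closed form for the special case of a linear limit state g = R - S with independent normal capacity R and demand S. The numbers below are illustrative, not bridge data from the thesis.

```python
import math

# FORM for the linear limit state g = R - S with independent normals:
# the reliability (safety) index beta is exact in this special case.
# mu/sigma values are assumed for illustration, not taken from the thesis.

mu_R, sd_R = 10.0, 1.5   # capacity, e.g. allowable deck acceleration (assumed)
mu_S, sd_S = 6.0, 1.0    # demand, e.g. predicted peak acceleration (assumed)

beta = (mu_R - mu_S) / math.sqrt(sd_R**2 + sd_S**2)   # safety index
p_f = 0.5 * math.erfc(beta / math.sqrt(2.0))          # P(g <= 0) = Phi(-beta)
```

A code-style check such as "beta must exceed 3.8" is then a direct constraint on p_f, which is why the abstract's observation that constant safety factors give different beta values across bridges matters.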
398 | Additive Manufacturing Applications for Suspension Systems: Part selection, concept development, and design. Waagaard, Morgan; Persson, Johan, January 2020.
This project was conducted as a case study at Öhlins Racing AB, a manufacturer of suspension systems for automotive applications. Öhlins usually manufactures its components by traditional methods such as forging, casting, and machining. The project aimed to investigate how applicable Additive Manufacturing (AM) is to suspension-system products and whether it can add value to suspension-system components. For this, a proof of concept was designed and manufactured. The thesis was conducted at Öhlins in Upplands Väsby via the consulting firm Combitech. A product catalog was searched and screened, and one part was selected. The selected part was used as a benchmark when a new part was designed for AM using methods including Topology Optimization (TO) and Design for Additive Manufacturing (DfAM). Product requirements for the chosen part were to reduce weight, add functions, or add value in other ways. The methods used throughout the project were based on traditional product development and DfAM and consisted of three steps: product screening, concept development, and part design. The redesigned part is ready to be manufactured in titanium by L-PBF at Amexci in Karlskoga. The thesis result shows that at least one of Öhlins' components in its product portfolio is suitable to be chosen, redesigned, and manufactured by AM. It also shows that value can be added to the product through increased performance, in this case mainly by weight reduction. The finished product is a fork bottom, designed with hollow structures, ready to print by L-PBF in a titanium alloy.
399 | Low-Cost FPGA-Based Digital Beamforming Architecture for CASA Weather Radar Applications. Seguin, Emmanuel J., 1 January 2010.
Digital beamforming is a powerful signal processing technique used in many communication and radar sensing applications. However, despite its many advantages, its high cost makes it a less popular choice than other directional antenna options. The development of a low-cost architecture for digital beamforming would make it a more feasible option, allowing it to be used in a number of new applications. Specifically, the Collaborative Adaptive Sensing of the Atmosphere (CASA) project's Distributed Collaborative Adaptive Sensing (DCAS) system, a low-cost weather radar system, could benefit from the incorporation of digital beamforming into small, inexpensive, but highly functional radars. Existing digital beamforming (DBF) architectures are implemented in complex systems comprising a number of expensive processing modules and other associated hardware. This project presents a low-cost digital beamforming architecture developed by combining today's powerful and inexpensive FPGA devices with recently available low-voltage-differential-signaling-enabled multi-channel analog-to-digital conversion hardware. The use of commercially available devices rather than custom hardware allows this architecture to be manufactured at a fraction of the cost of most alternatives. This makes it a viable alternative to the classic dish antennas of the DCAS system, allowing a reduction in size and cost that will benefit deployment. The flexibility of an FPGA-based DBF system also results in a more robust radar system. With this in mind, an architecture has been developed, fabricated, and evaluated.
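The core beamforming operation discussed above can be sketched for a narrowband uniform linear array: each element's sample is multiplied by a complex weight that cancels the expected phase from the steering direction, so signals from that direction add coherently. The element count and spacing below are generic assumptions, not the CASA hardware parameters.

```python
import cmath
import math

# Narrowband delay-and-sum digital beamforming for a uniform linear
# array. Element count and half-wavelength spacing are generic
# assumptions for illustration, not CASA radar values.

n_elem = 8
d_over_lambda = 0.5            # element spacing in wavelengths

def steering_vector(theta_deg):
    """Per-element phase of a plane wave arriving from angle theta."""
    s = math.sin(math.radians(theta_deg))
    return [cmath.exp(2j * math.pi * d_over_lambda * k * s)
            for k in range(n_elem)]

def array_response(weights, theta_deg):
    """|w^H a(theta)|: array gain toward direction theta."""
    a = steering_vector(theta_deg)
    return abs(sum(w.conjugate() * x for w, x in zip(weights, a)))

weights = steering_vector(20.0)              # steer the main lobe to 20 deg
on_target = array_response(weights, 20.0)    # coherent sum = n_elem
off_target = array_response(weights, -40.0)  # far down the beam pattern
```

On an FPGA this reduces to one complex multiply-accumulate per element per sample, which is why DBF maps so naturally onto the DSP resources of inexpensive modern devices.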
400 | Design, Control, and Validation of a Transient Thermal Management System with Integrated Phase-Change Thermal Energy Storage. Shanks, Michael Alexander, 6 December 2022.
An emerging technology in the field of transient thermal management is thermal energy storage (TES), which enables temporary, on-demand heat rejection via storage as latent heat in a phase-change material. Latent TES devices have enabled advances in many thermal management applications, including peak-load shifting to reduce the energy demand and cost of HVAC systems, and supplemental heat rejection in transient thermal management systems. However, the design of a transient thermal management system with integrated storage presents many challenges that are yet to be solved. For example, design approaches and performance metrics for determining the optimal dimensions of the TES device have only recently been studied. Another area of active research is estimation of the internal temperature state of the device, which can be difficult to measure directly given the transient nature of the thermal storage process. Furthermore, in contrast to the three main functions of a thermal-fluid system (heat addition, thermal transport, and heat rejection), thermal storage introduces the need for active, real-time control and automated decision-making to manage the operation of the thermal storage device.
In this thesis, I present the design process for integrating thermal energy storage into a single-phase thermal management system for rejecting transient heat loads, including design of the TES device, state-estimation and control-algorithm design, and validation in both simulation and experimental environments. Leveraging a reduced-order finite-volume simulation model of a plate-fin TES device, I develop a design approach that uses a transient simulation-based design optimization to determine the geometric dimensions the device needs in order to meet transient performance objectives while maximizing power density. The optimized TES device is integrated into a single-phase thermal-fluid testbed for experimental testing. Using the finite-volume model and feedback from thermocouples embedded in the device, I design and experimentally validate a state estimator, based on the state-dependent Riccati equation approach, that determines the internal temperature distribution to a high degree of accuracy. Real-time knowledge of the internal temperature state is critical for making control decisions; to manage the operation of the TES device within a transient thermal management system, I design and test, both in simulation and experimentally, a logic-based control strategy that uses fluid temperature measurements and estimates of the TES state to make real-time control decisions that meet critical thermal management objectives. Together, these advances demonstrate the potential of thermal energy storage as a component of thermal management systems and the feasibility of logic-based strategies for real-time control toward thermal management objectives.
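The logic-based control idea described above can be sketched with a lumped single-node model: when the loop temperature exceeds a threshold and latent capacity remains, heat is diverted into the store. All parameters, the bang-bang rule, and the forward-Euler integration below are simplifying assumptions, far coarser than the thesis's finite-volume model and SDRE estimator.

```python
# Lumped sketch of logic-based TES control: a transient heat pulse is
# partly absorbed by a phase-change store whenever the loop temperature
# exceeds a threshold and latent capacity remains. All parameters are
# assumed, illustrative values, not data from the thesis.

C_fluid = 5_000.0        # fluid-loop thermal capacitance, J/K (assumed)
UA_sink = 50.0           # steady heat-rejection conductance, W/K (assumed)
T_amb = 25.0             # sink temperature, deg C
T_limit = 45.0           # control threshold, deg C
E_latent_max = 2.0e5     # TES latent capacity, J (assumed)
Q_tes_rate = 1_500.0     # heat absorbed by the TES when engaged, W

T, E_stored, dt = 30.0, 0.0, 1.0
peak_T = T
for step in range(600):                       # simulate 10 min at 1 s steps
    t = step * dt
    Q_load = 3_000.0 if 60 <= t < 180 else 500.0      # transient heat pulse
    tes_on = T > T_limit and E_stored < E_latent_max  # logic-based decision
    Q_tes = Q_tes_rate if tes_on else 0.0
    dTdt = (Q_load - UA_sink * (T - T_amb) - Q_tes) / C_fluid
    T += dTdt * dt                            # forward-Euler update
    E_stored += Q_tes * dt                    # latent energy accumulated
    peak_T = max(peak_T, T)
```

With the store engaged, the loop peak stays well below where it would settle without TES (here the no-TES steady state during the pulse would be 85 deg C), which is the supplemental-heat-rejection role the abstract describes.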