341

L'électrophysiologie temps-réel en neuroscience cognitive : vers des paradigmes adaptatifs pour l'étude de l'apprentissage et de la prise de décision perceptive chez l'homme / Real-time electrophysiology in cognitive neuroscience : towards adaptive paradigms to study perceptual learning and decision making in humans

Sanchez, Gaëtan 27 June 2014
Today, psychological and physiological models of perceptual learning and decision making have become more biologically plausible, leading to more realistic (and more complex) generative models of psychophysiological observations. In parallel, the young but exponentially growing field of brain-computer interfaces (BCI) provides new tools and methods to analyze (mostly electrophysiological) data online. The main objective of this PhD thesis was to explore how the BCI paradigm of real-time electrophysiology could contribute to a better understanding of perceptual learning and decision-making processes in humans. At the empirical level, I studied decisions based on tactile stimuli, namely somatosensory frequency discrimination, and showed how an implicit sensory context biases our decisions. Using magnetoencephalography (MEG), I deciphered some of the neural correlates of these perceptual adaptive mechanisms. These findings support the hypothesis that an internal perceptual reference builds up over the course of the experiment. At the theoretical and methodological levels, I propose a generic view of how real-time electrophysiology could be used to optimize hypothesis testing by adapting the experimental design online, and I provide a first validation of this online adaptive design optimization (ADO) approach for maximizing design efficiency at the individual level. I also discuss the implications of this work for basic and clinical neuroscience as well as for BCI itself.
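To make the idea of online adaptive design optimization concrete, the following is a minimal sketch (not the thesis's actual pipeline) that assumes a simple two-parameter psychometric model, a grid posterior, and expected entropy reduction as the design-efficiency criterion; all grids, parameter names, and stimulus values are hypothetical.

```python
import numpy as np

# Illustrative two-parameter psychometric model: P(response = 1 | stimulus d).
def p_resp(d, threshold, slope):
    return 1.0 / (1.0 + np.exp(-slope * (d - threshold)))

# Grid posterior over (threshold, slope) and a set of candidate stimuli.
thresholds = np.linspace(-2.0, 2.0, 41)
slopes = np.linspace(0.5, 4.0, 15)
TH, SL = np.meshgrid(thresholds, slopes, indexing="ij")
posterior = np.full(TH.shape, 1.0 / TH.size)          # flat prior
designs = np.linspace(-3.0, 3.0, 25)                  # candidate stimulus values

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def next_design(posterior):
    """Pick the stimulus that maximizes the expected reduction in posterior entropy."""
    h_now, best_d, best_gain = entropy(posterior), designs[0], -np.inf
    for d in designs:
        lik1 = p_resp(d, TH, SL)                      # P(response = 1 | params, d)
        p1 = np.sum(posterior * lik1)                 # predictive P(response = 1)
        post1 = posterior * lik1 / p1
        post0 = posterior * (1.0 - lik1) / (1.0 - p1)
        h_exp = p1 * entropy(post1) + (1.0 - p1) * entropy(post0)
        if h_now - h_exp > best_gain:
            best_d, best_gain = d, h_now - h_exp
    return best_d

def update(posterior, d, response):
    """Bayesian update after observing response (0 or 1) to stimulus d."""
    lik = p_resp(d, TH, SL) if response else 1.0 - p_resp(d, TH, SL)
    post = posterior * lik
    return post / post.sum()

# Trial loop (responses would come from the participant or the online pipeline):
# d = next_design(posterior); posterior = update(posterior, d, observed_response)
```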
342

Otimização multidisciplinar em projeto de asas flexíveis utilizando metamodelos / Multidisciplinary design optimization of flexible wings using metamodels

Caixeta Júnior, Paulo Roberto 11 August 2011
Multidisciplinary Design Optimization (MDO) is an important and versatile design tool, and its use is spreading across several fields of engineering. The focus of this methodology is to bring the disciplines involved in the design together so that their variables are handled concomitantly in an optimization environment, yielding better solutions. MDO can be used at any stage of the design process (conceptual, preliminary, or detailed design), as long as the numerical models are fitted to the needs of each stage. This work describes the development of an MDO code for the conceptual design of flexible aircraft wings, with constraints regarding the aeroelastic phenomenon known as flutter. As a tool for the designer at the conceptual stage, the numerical models must be reasonably accurate and fast. The aim of this study is to analyze the use of metamodels, instead of a conventional model, for the flutter prediction of aircraft wings in the MDO code, which can significantly alter the computational cost of the optimization. For this purpose, three metamodeling techniques were evaluated, representing two basic classes of metamodels: interpolation methods and approximation methods. These classes are represented by radial basis function interpolation and artificial neural networks, respectively. The third method, considered a hybrid of the other two, is called radial basis function neural networks and attempts to combine the features of both in a single metamodel. The metamodels are built using an aeroelastic solver based on the finite element method coupled with a linear aerodynamic strip model. Performance results for the three metamodels are presented, showing that the artificial neural network is the best suited for flutter prediction. The MDO process uses a non-dominance-based multi-objective genetic algorithm whose objectives are the maximization of the critical flutter speed and the minimization of the structural mass. Two case studies are presented to evaluate the performance of the MDO code, showing that the overall optimization process does indeed search toward the Pareto frontier.
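As a minimal illustration of the interpolation class of metamodels mentioned above (a sketch, not the thesis code), a Gaussian radial basis function interpolator can be fit and queried as follows; the kernel choice and shape parameter eps are assumptions.

```python
import numpy as np

def rbf_fit(X, y, eps=1.0):
    """Fit an RBF interpolant: weights w solve Phi @ w = y,
    with Phi_ij = exp(-(eps * ||x_i - x_j||)**2) (Gaussian kernel)."""
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = np.exp(-(eps * r) ** 2)
    return np.linalg.solve(Phi, y)

def rbf_predict(X, w, Xnew, eps=1.0):
    """Evaluate the fitted interpolant at new points Xnew."""
    r = np.linalg.norm(Xnew[:, None, :] - X[None, :, :], axis=-1)
    return np.exp(-(eps * r) ** 2) @ w

# Hypothetical usage: X holds sampled wing design variables, y the flutter
# speeds computed by the aeroelastic (FE + strip-theory) solver.
# w = rbf_fit(X, y); v_flutter_est = rbf_predict(X, w, X_candidates)
```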
343

OMPP para projeto conceitual de aeronaves, baseado em heurísticas evolucionárias e de tomadas de decisões / OMPP for conceptual design of aircraft based on evolutionary heuristics and decision making

Abdalla, Alvaro Martins 30 October 2009
This work concerns the development of a methodology for multidisciplinary optimization of aircraft conceptual design. The optimized aircraft concept is based on an evolutionary study of the characteristics of the categories adjacent to the one proposed, with the candidates on the Pareto front selected through a QFD/fuzzy-arithmetic approach. As a test case, a military trainer aircraft was optimized to provide the proper transition between the basic and advanced training phases. To establish the conceptual parameters, this work integrates statistical entropy, quality function deployment (QFD), fuzzy arithmetic, and a genetic algorithm (GA) into weighted multidisciplinary design optimization (WMDO, the OMPP of the title) as a methodology for aircraft conceptual design. This methodology proved to be objective and well balanced, reducing design time and cost when compared with existing traditional design techniques.
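For illustration only: the thesis couples QFD, fuzzy arithmetic, statistical entropy, and a GA, while the sketch below shows just the weighted-aggregation step that a WMDO-style fitness might use; the objectives, bounds, weights, and senses are hypothetical.

```python
# Hypothetical normalization bounds, QFD-derived weights, and optimization senses.
BOUNDS = {"range_km": (500.0, 3000.0), "cost_musd": (5.0, 40.0)}
WEIGHTS = {"range_km": 0.6, "cost_musd": 0.4}        # assumed to sum to 1
SENSE = {"range_km": +1, "cost_musd": -1}            # +1 maximize, -1 minimize

def weighted_fitness(objectives):
    """Aggregate normalized objectives into a single GA fitness (higher is better)."""
    score = 0.0
    for name, w in WEIGHTS.items():
        lo, hi = BOUNDS[name]
        z = (objectives[name] - lo) / (hi - lo)       # map raw value to [0, 1]
        score += w * (z if SENSE[name] > 0 else 1.0 - z)
    return score

# Example candidate evaluated by the disciplinary models:
print(weighted_fitness({"range_km": 1800.0, "cost_musd": 12.0}))
```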
344

Otimização robusta multiobjetivo por análise de intervalo não probabilística : uma aplicação em conforto e segurança veicular sob dinâmica lateral e vertical acoplada / Multi-objective robust optimization by non-probabilistic interval analysis: an application to vehicle comfort and safety under coupled lateral and vertical dynamics

Drehmer, Luis Roberto Centeno January 2017
This thesis proposes a new tool for Non-probabilistic Interval Analysis for Multi-objective Robust Design Optimization (NPIA-MORDO). The tool aims at optimizing the lumped suspension parameters of a full vehicle model subjected to a double lane change (DLC) maneuver over different road profiles, in order to ensure comfort and safety for the driver. The multi-body model has 15 degrees of freedom (15-DOF): 11 for the vehicle and its seat and 4 for the driver's biodynamic model. The multi-objective function is composed of conflicting objectives and their tolerances, such as the root mean square (RMS) lateral and vertical accelerations at the driver's seat developed during the DLC maneuver. The suspension working space and the road-holding capacity of the tires are treated as constraints of the optimization problem. Uncertainties in the system behaviour are quantified by non-probabilistic interval analysis, using the α-cut levels method at level α = 0 (the level of greatest dispersion), carried out concurrently with the multi-objective optimization process. These uncertainties are applied both to the system parameters and to the design variables. To validate the model, developed in MATLAB®, the trajectory of the body's centre of gravity during the maneuver is compared with the commercial software CARSIM®, as are the lateral and vertical tire forces. The results are shown in several plots of the Pareto front between the multiple conflicting objectives of the evaluated model. The solutions on the Pareto front satisfy the conditions of the problem, and the multi-objective function obtained by aggregating the multiple objectives shows a difference of 1.66% between the solutions with the lowest and highest aggregated values. From the design variables of the best solution on the front, plots are generated for each degree of freedom, showing the time histories of displacements, velocities, and accelerations. For this case, the RMS vertical acceleration at the driver's seat is 1.041 m/s² with a tolerance of 0.631 m/s², while the RMS lateral acceleration is 1.908 m/s² with a tolerance of 0.168 m/s². The results obtained with NPIA-MORDO confirm that it is possible to account for the uncertainties in the parameters and design variables while the outer optimization is performed, avoiding the need for subsequent uncertainty-propagation analyses. The non-probabilistic interval analysis employed by the tool is a viable alternative to the standard deviation as a measure of dispersion, since it requires no prior probability distribution and better reflects practice in the automotive industry, where tolerances are preferred.
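A minimal sketch of propagating α = 0 tolerance intervals through a response function by evaluating the interval corners; this vertex enumeration is exact only for monotone behaviour over the tolerance box, and the function and variable names are illustrative, not NPIA-MORDO itself.

```python
import itertools
import numpy as np

def interval_response(f, nominal, tol):
    """Propagate design-variable tolerances through response f.
    nominal: nominal values; tol: half-widths of the alpha = 0 intervals.
    Returns (nominal response, response tolerance) by evaluating all interval
    corners -- a vertex enumeration, exact only for monotone f."""
    nominal, tol = np.asarray(nominal, float), np.asarray(tol, float)
    corners = [nominal + np.array(s) * tol
               for s in itertools.product((-1.0, 1.0), repeat=len(nominal))]
    vals = np.array([f(c) for c in corners])
    return f(nominal), 0.5 * (vals.max() - vals.min())

# Hypothetical use: f computes the RMS seat acceleration for a vector of
# suspension stiffness/damping values; the returned tolerance plays the role
# of the robustness term added to the multi-objective function.
```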
345

A Framework for the Determination of Weak Pareto Frontier Solutions under Probabilistic Constraints

Ran, Hongjun 09 April 2007
A framework is proposed that combines separately developed multidisciplinary optimization, multi-objective optimization, and joint probability assessment methods in a decoupled way to solve joint-probabilistic-constraint, multi-objective, multidisciplinary optimization problems that are representative of realistic conceptual design problems of design alternative generation and selection. The intent is to find the Weak Pareto Frontier (WPF) solutions, which include additional compromised solutions besides the ones identified by a conventional Pareto frontier. The framework starts by constructing fast and accurate surrogate models of the different disciplinary analyses. A new hybrid method is formed that combines second-order Response Surface Methodology (RSM) with the Support Vector Regression (SVR) method. The three parameters that SVR requires to be pre-specified are selected automatically using a modified information criterion based on model fitting error, prediction error, and model complexity; the prediction error is estimated inexpensively with a new method called Random Cross Validation. This modified information criterion is also used to select the best surrogate model for a given problem from among the RSM, SVR, and hybrid methods. A new neighborhood search method based on Monte Carlo simulation is proposed to find valid designs that satisfy the deterministic constraints and are consistent in the coupling variables featured in a multidisciplinary design problem, while at the same time decoupling the three loops required by the multidisciplinary, multi-objective, and probabilistic features. Two schemes have been developed: one finds the WPF by generating a large enough number of valid design solutions that some WPF solutions are included among them; the other finds the WPF directly for each consistent design zone. The probabilities associated with the probabilistic constraints are then estimated, and the WPF and corresponding design solutions are found. Various examples demonstrate the feasibility of this framework.
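A sketch of two of the ingredients named above, under the assumption that "Random Cross Validation" means repeated random train/holdout splits and that the modified information criterion combines log fit error, log prediction error, and a complexity penalty; the exact forms in the dissertation may differ.

```python
import numpy as np

def random_cv_error(fit, predict, X, y, n_splits=20, holdout=0.2, seed=0):
    """Estimate surrogate prediction error by repeated random train/holdout splits.
    X, y are NumPy arrays; fit(X, y) returns a model, predict(model, X) predicts."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_splits):
        idx = rng.permutation(len(X))
        k = max(1, int(holdout * len(X)))
        test, train = idx[:k], idx[k:]
        model = fit(X[train], y[train])
        errs.append(np.sqrt(np.mean((predict(model, X[test]) - y[test]) ** 2)))
    return float(np.mean(errs))

def selection_score(fit_rmse, pred_rmse, n_params, n_samples, w=1.0):
    """Illustrative information-criterion-like score (lower is better):
    fitting error + weighted prediction error + complexity penalty."""
    return np.log(fit_rmse + 1e-12) + w * np.log(pred_rmse + 1e-12) \
           + n_params / max(n_samples, 1)
```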
346

Towards multidisciplinary design optimization capability of horizontal axis wind turbines

McWilliam, Michael Kenneth 13 August 2015
Research into advanced wind turbine design has shown that load-alleviation strategies like bend-twist coupled blades and coned rotors could reduce costs. However, these strategies rely on nonlinear aero-structural dynamics and provide additional benefits to components beyond the blades, so realizing their full benefits will require Multidisciplinary Design Optimization (MDO). This research expands the MDO capabilities for horizontal axis wind turbines. The early work explored the numerical stability properties of Blade Element Momentum (BEM) models and developed a provincial-scale wind farm siting model to help engineers determine optimal design parameters. The main focus of this research was to incorporate advanced analysis tools into an aero-elastic optimization framework. To adequately explore advanced designs through optimization, a new set of medium-fidelity analysis tools is required: tools that resolve more of the physics than conventional approaches such as BEM models and linear beam models, while remaining faster than high-fidelity techniques like grid-based computational fluid dynamics and shell- and brick-based finite element models. Nonlinear beam models based on Geometrically Exact Beam Theory (GEBT) and Variational Asymptotic Beam Section Analysis (VABS) can resolve the effects of flexible structures with anisotropic material properties, and Lagrangian Vortex Dynamics (LVD) can resolve the aerodynamic effects of novel blade curvature. Initially this research focused on structural optimization capabilities: first, it developed adjoint-based gradients for the coupled GEBT and VABS analysis; second, it developed a composite lay-up parameterization scheme based on manufacturing processes. The most significant challenge was obtaining aero-elastic optimization solutions in the presence of erroneous gradients, which arise from the poor convergence properties of conventional LVD. This thesis presents a new LVD formulation based on the Finite Element Method (FEM) that defines an objective convergence metric and analytic gradients. By adopting the same formulation used in structural models, this aerodynamic model can be solved simultaneously in aero-structural simulations. The FEM-based LVD model is affected by singularities, but strategies are presented to overcome these problems. This research successfully demonstrates the FEM-based LVD model in aero-elastic design optimization.
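To make the BEM numerical-stability point concrete, here is a minimal sketch of the classical blade-element-momentum fixed-point iteration for one annulus; tip-loss and high-induction corrections are omitted, the airfoil polar is supplied by the caller, and the under-relaxation factor is an illustrative stabilization rather than the thesis's method.

```python
import numpy as np

def bem_annulus(tsr_local, solidity, polar, twist, relax=0.3, tol=1e-6, it_max=500):
    """Fixed-point iteration for the axial (a) and tangential (ap) induction
    factors of one blade annulus.  polar(alpha) returns (Cl, Cd); angles in rad."""
    a, ap = 0.3, 0.0
    for _ in range(it_max):
        phi = np.arctan2(1.0 - a, (1.0 + ap) * tsr_local)      # inflow angle
        cl, cd = polar(phi - twist)                            # alpha = phi - twist
        cn = cl * np.cos(phi) + cd * np.sin(phi)
        ct = cl * np.sin(phi) - cd * np.cos(phi)
        a_new = 1.0 / (4.0 * np.sin(phi) ** 2 / (solidity * cn) + 1.0)
        ap_new = 1.0 / (4.0 * np.sin(phi) * np.cos(phi) / (solidity * ct) - 1.0)
        # Under-relaxation: the plain update can oscillate or diverge, which is
        # the kind of numerical-stability behaviour examined in the early work.
        a_next = (1.0 - relax) * a + relax * a_new
        ap_next = (1.0 - relax) * ap + relax * ap_new
        if abs(a_next - a) < tol and abs(ap_next - ap) < tol:
            return a_next, ap_next
        a, ap = a_next, ap_next
    return a, ap                                               # not converged

# Hypothetical thin-airfoil-like polar for illustration:
# a, ap = bem_annulus(4.0, 0.05, lambda al: (2 * np.pi * al, 0.01), twist=0.1)
```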
347

Metamodeling strategies for high-dimensional simulation-based design problems

Shan, Songqing 13 October 2010
Computational tools such as finite element analysis and simulation are commonly used for system performance analysis and validation. It is often impractical to rely exclusively on the high-fidelity simulation model for design activities because of high computational costs, so mathematical models are typically constructed to approximate the simulation model and support the design activities. Such models are referred to as “metamodels,” and the process of constructing one is called “metamodeling.” Metamodeling, however, faces significant challenges arising from the high dimensionality of the underlying problems, in addition to the high computational costs and unknown function properties (that is, black-box functions) of the analysis/simulation. The combination of these three challenges defines the so-called high-dimensional, computationally-expensive, and black-box (HEB) problems, for which practical methods are currently lacking. By surveying existing techniques, this dissertation finds that the major deficiency of current metamodeling approaches lies in the separation of the metamodeling from the properties of the underlying functions. The survey also identifies two promising approaches for solving HEB problems: mapping and decomposition. A new analytic methodology, radial basis function–high-dimensional model representation (RBF-HDMR), is proposed to model HEB problems. RBF-HDMR decomposes the effects of variables or variable sets on system outputs and, compared with other metamodels, has three distinct advantages: 1) it fundamentally reduces the number of calls to the expensive simulation needed to build a metamodel, thus breaking or alleviating the exponentially increasing computational difficulty; 2) it reveals the functional form of the black-box function; and 3) it discloses the intrinsic characteristics (for instance, linearity/nonlinearity) of the black-box function. RBF-HDMR has been intensively tested on mathematical and practical problems chosen from the literature, and has also been successfully applied to the power transfer capability analysis of the Manitoba-Ontario Electrical Interconnections with 50 variables. The test results demonstrate that RBF-HDMR is a powerful tool for modeling large-scale simulation-based engineering problems. The RBF-HDMR model and its construction approach therefore represent a breakthrough in modeling HEB problems and make it possible to optimize high-dimensional simulation-based design problems.
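A minimal sketch of a first-order cut-HDMR with one-dimensional Gaussian-RBF component functions, in the spirit of RBF-HDMR; the restriction to first-order terms, the kernel, and the sample counts are simplifications. It illustrates why the number of expensive calls grows as 1 + d·n rather than exponentially in the dimension d.

```python
import numpy as np

def rbf1d_fit(x, y, eps=1.0):
    """One-dimensional Gaussian-RBF interpolant through (x, y)."""
    Phi = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)
    w = np.linalg.solve(Phi, y)
    return lambda xq: np.exp(-(eps * (np.atleast_1d(xq)[:, None] - x[None, :])) ** 2) @ w

def hdmr_first_order(f, lower, upper, n_1d=5):
    """First-order cut-HDMR: f(x) ~ f0 + sum_i fi(x_i), each fi built from
    samples along coordinate i through the cut center (1 + d * n_1d calls)."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    center = 0.5 * (lower + upper)
    f0 = f(center)
    comps = []
    for i in range(len(center)):
        xi = np.linspace(lower[i], upper[i], n_1d)
        yi = np.array([f(np.r_[center[:i], v, center[i + 1:]]) for v in xi]) - f0
        comps.append(rbf1d_fit(xi, yi))
    def surrogate(x):
        x = np.asarray(x, float)
        return f0 + sum(comps[i](x[i])[0] for i in range(len(x)))
    return surrogate
```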
348

Kinematics and Optimal Control of a Mobile Parallel Robot for Inspection of Pipe-like Environments

Sarfraz, Hassan 24 January 2014
The objective of this thesis is to analyze the kinematics of a mobile parallel robot, with contributions that pertain to the singularity analysis, the optimization of geometric parameters, and the optimal control to avoid singularities when navigating across singular geometric configurations. The analysis of the workspace and singularities is performed in prescribed reference workspace regions using a discretization method. Serial and parallel singularities are analyzed analytically and all possible singular configurations are presented. A kinematic conditioning index is used to determine the robot’s proximity to a singular configuration, and a method for the determination of a continuous, singularity-free workspace is detailed. The geometric parameters of the system are optimized in various types of pipe-like structures with respect to a suitable singularity index, in order to avoid singularities during navigation across elbows. The optimization problem is formulated with the objective of maximizing the reachable workspace and minimizing the singularities, subject to constraints such as collision avoidance, singularity avoidance, workspace continuity, and the contact constraints imposed between the boundaries and the wheels of the robot. A parametric variation method is used to optimize the design parameters. The optimal design parameters found are normalized with respect to the width of the pipe-like structures, so the results can be generalized for use in the development phase of the robot. An optimal control scheme is proposed to generate singularity-free trajectories when the robotic device has to cross a geometric singularity in a sharp 90° elbow. Such a geometric singularity inherently leads to singularities in the Jacobian of the system, and therefore a modified device with an augmented number of degrees of freedom is introduced to be able to generate non-singular trajectories.
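A sketch of one common way to compute a kinematic conditioning index from the robot Jacobian (the inverse condition number obtained from its singular values); whether the thesis uses exactly this form is not stated, so treat it as illustrative.

```python
import numpy as np

def conditioning_index(J):
    """Kinematic conditioning index of a Jacobian: ratio of the smallest to the
    largest singular value, in [0, 1].  Values near 0 flag proximity to a
    singular configuration; 1 corresponds to an isotropic Jacobian."""
    s = np.linalg.svd(J, compute_uv=False)
    return s.min() / s.max()

# Hypothetical use inside a workspace sweep: discretize the reference region,
# evaluate J(q) at each pose q, and keep only poses whose index stays above a
# chosen threshold to map a continuous, singularity-free workspace.
```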
350

A systematic approach to design for lifelong aircraft evolution

Lim, Dongwook 06 April 2009
Modern aerospace systems rely heavily on legacy platforms and their derivatives. Historical examples show that after a vehicle design is frozen and delivered to a customer, successive upgrades are often made to fulfill changing requirements. Current practices of adapting to emerging needs with derivative designs, retrofits, and upgrades are often reactive and ad hoc, resulting in performance and cost penalties. Recent DoD acquisition policies have addressed this problem by establishing a general paradigm of design for lifelong evolution. However, there is a need for a unified, practical design approach that considers the lifetime evolution of an aircraft concept by incorporating future requirements and technologies. This research proposes a systematic approach with which decision makers can evaluate the value and risk of a new aircraft development program, including potential derivative development opportunities. The proposed Evaluation of Lifelong Vehicle Evolution (EvoLVE) method is a two- or multi-stage representation of the aircraft design process that accommodates initial development phases as well as follow-on phases. One of its key elements is the Stochastic Programming with Recourse (SPR) technique, which accounts for uncertainties associated with future requirements. The remedial approach of SPR, with its two distinct problem-solving steps, is well suited to aircraft design problems in which derivatives, retrofits, and upgrades have been used to fix designs that were once optimal but no longer are. The solution approach of SPR is complemented by the Risk-Averse Strategy Selection (RASS) technique to gauge the risk associated with vehicle evolution options. In the absence of a full description of the random space, a scenario-based approach captures the randomness with a few probable scenarios and reveals the implications of different future events. Lastly, an interactive framework for decision-making support allows simultaneous navigation of the current and future design space with a greater degree of freedom. A cantilevered beam design problem was set up and solved using the SPR technique to showcase its application to an engineering design setting, and the full EvoLVE method was applied to a notional multi-role fighter based on the F/A-18 Hornet.
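A toy, scenario-based sketch of the two-stage "stochastic programming with recourse" idea: commit to a first-stage design now, then pay the expected cost of the recourse (derivative or upgrade) under each future-requirement scenario. All designs, scenarios, probabilities, and costs below are hypothetical and solved by simple enumeration rather than the thesis's formulation.

```python
def spr_choose(first_stage, scenarios, recourse_cost):
    """Two-stage stochastic program with recourse, by enumeration.
    first_stage:  dict design -> upfront cost
    scenarios:    list of (future_requirement, probability)
    recourse_cost(design, requirement) -> upgrade/derivative cost in that scenario
    Returns the design minimizing upfront + expected recourse cost."""
    best, best_total = None, float("inf")
    for design, c1 in first_stage.items():
        expected_c2 = sum(p * recourse_cost(design, req) for req, p in scenarios)
        if c1 + expected_c2 < best_total:
            best, best_total = design, c1 + expected_c2
    return best, best_total

# Hypothetical toy data: a larger wing costs more now but is cheaper to
# upgrade if a long-range requirement emerges later.
first_stage = {"baseline_wing": 100.0, "large_wing": 115.0}
scenarios = [("long_range", 0.4), ("status_quo", 0.6)]
upgrade = {("baseline_wing", "long_range"): 40.0, ("large_wing", "long_range"): 5.0}
print(spr_choose(first_stage, scenarios, lambda d, r: upgrade.get((d, r), 0.0)))
```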
