11

Variational Inference for Data-driven Stochastic Programming

Prateek Jaiswal (11210091) 30 July 2021 (has links)
Stochastic programs are standard models for decision-making under uncertainty and have been extensively studied in the operations research literature. In general, stochastic programming involves minimizing an expected cost function, where the expectation is with respect to fully specified stochastic models that quantify the aleatoric or 'inherent' uncertainty in the decision-making problem. In practice, however, the stochastic models are unknown but can be estimated from data, introducing an additional epistemic uncertainty into the decision-making problem. The Bayesian framework provides a coherent way to quantify the epistemic uncertainty through the posterior distribution by combining prior beliefs of the decision-makers with the observed data. Bayesian methods have been used for data-driven decision-making in applications such as inventory management, portfolio design, machine learning, optimal scheduling, and staffing.

Bayesian methods are challenging to implement, mainly because the posterior is computationally intractable, necessitating the computation of approximate posteriors. Broadly speaking, there are two families of methods in the literature for approximate posterior inference. The first are sampling-based methods such as Markov chain Monte Carlo. Sampling-based methods are theoretically well understood, but they suffer from issues such as high variance, poor scalability to high-dimensional problems, and complex convergence diagnostics. Consequently, we propose to use optimization-based methods, collectively known as variational inference (VI), that use information projections to compute an approximation to the posterior. Empirical studies have shown that VI methods are computationally faster and scale easily to higher-dimensional problems and large datasets. However, the theoretical guarantees of these methods are not well understood, and VI methods are empirically and theoretically less explored in the decision-theoretic setting.

In this thesis, we first propose a novel VI framework for risk-sensitive data-driven decision-making, which we call risk-sensitive variational Bayes (RSVB). In RSVB, we jointly compute a risk-sensitive approximation to the 'true' posterior and the optimal decision by solving a minimax optimization problem. The RSVB framework includes the naive approach of first computing a VI approximation to the true posterior and then using it in place of the true posterior for decision-making. We show that the RSVB approximate posterior and the corresponding optimal value and decision rules are asymptotically consistent, and we also compute their rates of convergence. We illustrate our theoretical findings in both parametric and nonparametric settings with three examples: the single-product newsvendor model, the multi-product newsvendor model, and Gaussian process classification. Second, we present the Bayesian joint chance-constrained stochastic program (BJCCP) for modeling decision-making problems with epistemically uncertain constraints. We observe that using VI methods for posterior approximation can ensure convexity of the feasible set of the BJCCP, unlike sampling-based methods, and we therefore propose a VI approximation for the BJCCP. We also show that the optimal value computed using the VI approximation of the BJCCP is statistically consistent. Moreover, we derive the rate of convergence of the optimal value and the rate at which a VI approximate solution of the BJCCP is feasible under the true constraints. We demonstrate the utility of our approach on an optimal staffing problem for an M/M/c queue. Finally, this thesis contributes to the growing literature on the statistical performance of VI methods. In particular, we establish the frequentist consistency of an approximate posterior computed using a well-known VI method that minimizes the Rényi divergence from the 'true' posterior.
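To make the "VI as optimization" idea concrete, here is a minimal sketch (not the RSVB framework itself) of fitting a log-normal variational family to a conjugate Gamma-exponential posterior by maximizing a Monte Carlo estimate of the ELBO; the model, grid search, and all parameter values are illustrative assumptions, not taken from the thesis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy model: demand ~ Exponential(rate lam), prior lam ~ Gamma(a0, rate b0).
# The exact posterior is Gamma(a0 + n, rate b0 + sum(x)); we pretend it is
# intractable and fit q = LogNormal(m, sd) by maximizing the ELBO.
a0, b0 = 2.0, 1.0
x = rng.exponential(scale=1.0 / 1.5, size=50)   # synthetic data, true rate 1.5
n, s = len(x), x.sum()
eps = rng.normal(size=1000)                      # common random numbers

def elbo(m, sd):
    lam = np.exp(m + sd * eps)                   # reparameterized q-samples
    log_joint = (stats.gamma.logpdf(lam, a0, scale=1.0 / b0)  # prior
                 + n * np.log(lam) - lam * s)                 # likelihood
    entropy = m + 0.5 * np.log(2 * np.pi * np.e * sd**2)      # LogNormal entropy
    return log_joint.mean() + entropy

# Crude grid search over variational parameters (a sketch, not SGD or CAVI).
grid_m = np.linspace(-1.0, 2.0, 41)
grid_sd = np.linspace(0.05, 1.0, 30)
m_best, sd_best = max(((m, sd) for m in grid_m for sd in grid_sd),
                      key=lambda pair: elbo(*pair))
exact = stats.gamma(a0 + n, scale=1.0 / (b0 + s))  # exact posterior, for reference
print(f"VI mean {np.exp(m_best + sd_best**2 / 2):.3f} "
      f"vs exact posterior mean {exact.mean():.3f}")
```

The same approximate posterior could then be plugged into a downstream decision problem, which is the "naive approach" the abstract contrasts with RSVB.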
12

Robust and stochastic MPC of uncertain-parameter systems

Fleming, James January 2016 (has links)
Constraint handling is difficult in model predictive control (MPC) of linear differential inclusions (LDIs) and linear parameter varying (LPV) systems. The designer is faced with a choice of using conservative bounds that may give poor performance, or accurate ones that require heavy online computation. This thesis presents a framework to achieve a more flexible trade-off between these two extremes by using a state tube, a sequence of parametrised polyhedra that is guaranteed to contain the future state. To define controllers using a tube, one must ensure that the polyhedra are a subset of the region defined by constraints. Necessary and sufficient conditions for these subset relations follow from duality theory, and it is possible to apply these conditions to constrain predicted system states and inputs with little conservatism. This leads to a general method of MPC design for uncertain-parameter systems. The resulting controllers have strong theoretical properties, can be implemented using standard algorithms and outperform existing techniques. Crucially, the online optimisation used in the controller is a convex problem with a number of constraints and variables that increases only linearly with the length of the prediction horizon. This holds true for both LDI and LPV systems. For the latter it is possible to optimise over a class of gain-scheduled control policies to improve performance, with a similar linear increase in problem size. The framework extends to stochastic LDIs with chance constraints, for which there are efficient suboptimal methods using online sampling. Sample approximations of chance constraint-admissible sets are generally not positively invariant, which motivates the novel concept of 'sample-admissible' sets with this property to ensure recursive feasibility when using sampling methods. The thesis concludes by introducing a simple, convex alternative to chance-constrained MPC that applies a robust bound to the time average of constraint violations in closed loop.
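The duality-based subset test the abstract alludes to can be illustrated with a standard fact: a nonempty polyhedron {x : Ax <= b} lies inside the half-space {x : c'x <= d} if and only if some y >= 0 satisfies A'y = c and b'y <= d. A hedged sketch using scipy (the matrices are made-up toy data, not from the thesis); polyhedron-in-polyhedron containment follows by applying the test row by row:

```python
import numpy as np
from scipy.optimize import linprog

def contains_halfspace(A, b, c, d):
    """True if {x: A@x <= b} (assumed nonempty) lies inside {x: c@x <= d}.

    By LP duality, max{c@x : A@x <= b} = min{b@y : A.T@y = c, y >= 0},
    so containment holds iff the dual LP has optimal value <= d.
    """
    res = linprog(c=b,                # dual objective uses b as the cost vector
                  A_eq=A.T, b_eq=c,
                  bounds=[(0, None)] * A.shape[0], method="highs")
    return res.status == 0 and res.fun <= d + 1e-9

# Toy check: the unit box |x_i| <= 1 lies inside x1 + x2 <= 3, but not <= 1.
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b = np.ones(4)
print(contains_halfspace(A, b, np.array([1., 1.]), 3.0))  # True
print(contains_halfspace(A, b, np.array([1., 1.]), 1.0))  # False
```

Embedding the dual multipliers y as decision variables is what lets tube MPC impose such containments as linear constraints inside the online optimisation.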
13

Approche novatrice pour la conception et l’exploitation d’avions écologiques / Innovative and integrated approach for environmentally efficient aircraft design and operations

Prigent, Sylvain 17 September 2015 (has links)
The objective of this PhD work is to pose, investigate, and solve the highly multidisciplinary and multi-objective problem of environmentally efficient aircraft design and operation. To this end, the three main drivers for optimizing the environmental performance of an aircraft are the airframe, the engine, and the mission profile, i.e., the trajectory. The figures of merit considered for optimization are fuel burn, local emissions, global emissions, climate impact (noise excluded), and the aircraft's operating cost. The study focuses on finding efficient compromise strategies between these objectives, identifying the most powerful design architectures and design-driver combinations for improving environmental performance, and analysing the resulting optimal configurations. Uncertainty in the underlying models is taken into account using rigorously selected methods. A hybrid aircraft configuration is proposed to reach the climate-impact reduction objective.
14

Otimização sob restrições probabilísticas: teoria e aplicações

Araújo, Julyana Kelly Tavares de 30 December 2012 (has links)
This work presents an approach to chance-constrained programming (CCP). This kind of optimization is used to model uncertainty and has become useful in many areas of knowledge. The main objective of the work was to present the theory of CCP and, beyond that, to describe applications in engineering and public policy. This tool is particularly relevant for production systems because of the uncertainty inherent in their processes. After presenting the theory needed to understand chance-constrained programming, the work applies the technique to the emergency medical care service (SAMU) of João Pessoa, using the model proposed by Beraldi et al. (2004). The application served to determine the number of ambulances needed to meet the demand of João Pessoa, as well as the locations where they should be stationed. Understanding and working with this technique requires prior knowledge of statistics, applied mathematics, and computing. The work therefore covers discrete and continuous random variables and concepts of probability functions; in applied mathematics, linear optimization, facility location, and log-concave functions; and, on the computational side, MATLAB R007, Google Maps, and CPLEX were used to implement the model. The great benefit of CCP is that it offers a set of feasible solutions from which the decision-maker can choose according to their own situation.
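The Beraldi et al. model is not reproduced here, but the core mechanism, turning a probabilistic service-level constraint with discrete demand into a deterministic one, can be sketched in a few lines: if calls at a base arrive as Poisson(lambda), the smallest fleet size k satisfying P(demand <= k) >= alpha is just the alpha-quantile. The arrival rates below are made-up illustrations:

```python
from scipy.stats import poisson

# Chance constraint P(demand <= k) >= alpha with Poisson(lam) demand
# reduces to the deterministic requirement k >= Poisson alpha-quantile.
alpha = 0.95
for lam in (2.0, 5.0, 9.0):            # hypothetical calls per shift per zone
    k = int(poisson.ppf(alpha, lam))   # smallest k with CDF(k) >= alpha
    print(f"rate {lam}: at least {k} ambulances for {alpha:.0%} coverage")
```

In a full location model, such quantile requirements become linear constraints on integer siting variables, which is what makes the chance-constrained formulation solvable by CPLEX.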
15

Advanced Decomposition Methods in Stochastic Convex Optimization

Kůdela, Jakub Unknown Date (has links)
When working with stochastic programming problems, we often encounter optimization problems that are too large to be handled by routine mathematical programming methods. In some cases, however, these problems have a suitable structure that allows the use of specialized decomposition methods for solving large-scale optimization problems. This thesis deals with two classes of stochastic programming problems that have such special structure, namely two-stage stochastic programs and chance-constrained programs, and with advanced decomposition methods that can be used to solve problems in these two classes. We describe a new method for constructing "warm-start" cuts for the Generalized Benders Decomposition, which is used for solving two-stage stochastic problems. For the class of chance-constrained problems, we present an original decomposition method, which we call the "Pool & Discard" algorithm. The usefulness of the described decomposition methods is demonstrated on several examples and engineering applications.
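The thesis's Pool & Discard algorithm is not reproduced here, but a generic constraint-generation loop in the same spirit can be sketched for a scenario-approximated chance-constrained LP: keep a small pool of scenario constraints, add the most violated one, and discard those that are slack. All data below are synthetic:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, n_scen = 2, 2000
c = np.array([-1.0, -1.0])                        # maximize x1 + x2
A = 1.0 + 0.1 * rng.standard_normal((n_scen, n))  # random scenario rows
b = np.ones(n_scen)                               # scenario constraints A_s x <= 1

pool = list(range(5))                             # start with a tiny pool
for it in range(50):
    res = linprog(c, A_ub=A[pool], b_ub=b[pool],
                  bounds=[(0, None)] * n, method="highs")
    x = res.x
    viol = A @ x - b                              # violation over all scenarios
    worst = int(np.argmax(viol))
    if viol[worst] <= 1e-9:                       # pool solution feasible for all
        break
    slack = (A[pool] @ x - b[pool]) < -1e-6
    pool = [s for s, drop in zip(pool, slack) if not drop]  # discard slack rows
    pool.append(worst)                            # pool the worst violator
print(f"done after {it + 1} iterations, x = {x.round(4)}, pool size {len(pool)}")
```

The point of such schemes is that the LP solved at each step involves only a handful of the thousands of scenario constraints.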
16

State estimation and trajectory planning using box particle kernels / Estimation d'état et planification de trajectoire par mixtures de noyaux bornés

Merlinge, Nicolas 29 October 2018 (has links)
State estimation and trajectory planning are two crucial functions for autonomous systems, and in particular for aerospace vehicles. Particle filters and sample-based trajectory planning have been widely considered to tackle non-linearities and non-Gaussian uncertainties. However, these approaches may produce erratic results due to the sampled approximation of the state density, and their high computational cost limits their practical interest. This thesis investigates the use of box kernel mixtures to describe multimodal probability density functions. A box kernel mixture is a weighted sum of basic functions (e.g., uniform kernels) that integrate to unity and whose supports are bounded by boxes, i.e., vectors of intervals. This modelling yields a more extensive description of the state density while requiring a lower computational load. New algorithms are developed, based on a derivation of the Box Particle Filter (BPF) for state estimation and of a particle-based chance-constrained optimisation (Particle Control) for trajectory planning under uncertainty. The uncertainties involved are assumed to be bounded, and the chosen application example is terrain-aided navigation, a demanding state-estimation problem.

To tackle ambiguous state estimation problems, i.e., problems in which one measurement value may correspond to several possible state values, a Box Regularised Particle Filter (BRPF) is introduced. The BRPF consists of an improved BPF with a guaranteed resampling step and a smoothing strategy based on kernel regularisation. The proposed strategy is theoretically proved to outperform the original BPF in terms of Mean Integrated Square Error (MISE), and empirically shown to reduce the Root Mean Square Error (RMSE) of estimation. The BRPF reduces the computational load significantly and is robust to measurement ambiguity. The BRPF is also integrated into federated and distributed architectures, demonstrating its efficiency in multi-sensor and multi-agent systems.

To tackle constrained trajectory planning under non-Gaussian uncertainty, a Box Particle Control (BPC) approach is introduced. BPC relies on the same interval-bounded kernel mixture description of the state density, and consists of propagating the state density along a state trajectory up to a given prediction horizon. It yields a more accurate description of the state uncertainty than previous particle-based algorithms. A chance-constrained optimisation is performed, which consists of finding the sequence of future control inputs that minimises a cost function while keeping the probability of constraint violation (failure probability) below a given threshold. For similar performance, BPC yields a significant reduction in computational load with respect to previous approaches.
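A minimal 1-D sketch of the box-particle idea may help: each particle is an interval with a weight (a uniform kernel), prediction propagates the interval bounds, and the update contracts each box against the measurement box and reweights it by the surviving fraction of its volume. This is a loose illustration under assumed linear dynamics and bounded noise; it omits the BRPF's guaranteed resampling and kernel regularisation:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 20
lo = rng.uniform(-10, 9, N); hi = lo + 1.0   # N interval particles [lo, hi]
w = np.full(N, 1.0 / N)
x_true = 3.0

for t in range(10):
    u, q = 0.5, 0.2                          # known input, bounded process noise
    x_true += u + rng.uniform(-q, q)
    lo, hi = lo + u - q, hi + u + q          # interval propagation of x' = x + u + w
    e = 0.8                                  # bounded measurement noise
    z = x_true + rng.uniform(-e, e)
    new_lo = np.maximum(lo, z - e)           # contract each box against the
    new_hi = np.minimum(hi, z + e)           # measurement box [z - e, z + e]
    frac = np.clip(new_hi - new_lo, 0.0, None) / (hi - lo)
    w *= frac                                # weight ~ surviving volume fraction
    keep = w > 0
    if not keep.any():                       # all boxes inconsistent: reset
        lo, hi = np.full(N, z - e), np.full(N, z + e)
        w = np.full(N, 1.0 / N)
        continue
    lo[keep], hi[keep] = new_lo[keep], new_hi[keep]
    w /= w.sum()
    if 1.0 / (w ** 2).sum() < N / 2:         # degenerate weights: resample by
        idx = rng.choice(N, size=N, p=w)     # subdividing selected boxes
        mid = (lo[idx] + hi[idx]) / 2
        left = rng.integers(0, 2, N).astype(bool)
        lo = np.where(left, lo[idx], mid)
        hi = np.where(left, mid, hi[idx])
        w = np.full(N, 1.0 / N)

est = np.sum(w * (lo + hi) / 2)              # weighted box midpoints
print(f"true state {x_true:.3f}, box-particle estimate {est:.3f}")
```

A handful of boxes covers the state space that would otherwise require many point particles, which is the computational argument made in the abstract.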
17

Stochastic Optimization for Integrated Energy System with Reliability Improvement Using Decomposition Algorithm

Huang, Yuping 01 January 2014 (has links)
As energy demands increase and energy resources change, the traditional energy system has been upgraded and reconstructed to support the development and sustainability of human society. Considerable studies have been conducted on energy expansion planning and electricity generation operations, mainly considering the integration of traditional fossil-fuel generation with renewable generation. Because the energy market is full of uncertainty, these uncertainties continuously challenge market design and operations, and even national energy policy. In fact, little consideration has been given to optimizing energy expansion and generation while taking into account the variability and uncertainty of energy supply and demand in energy markets. This often leaves an energy system unable to cope with unexpected changes, such as a surge in fuel price, a sudden drop in demand, or a large fluctuation in renewable supply. Thus, for an overall energy system, optimizing long-term expansion planning and market operation in a stochastic environment is crucial to improving the system's reliability and robustness. Since little attention has been paid to imposing risk measures on the power management system, this dissertation discusses applying risk-constrained stochastic programming to improve the efficiency, reliability, and economics of energy expansion and electric power generation. Considering the supply-demand uncertainties affecting energy system stability, three optimization strategies are proposed to enhance the overall reliability and sustainability of an energy system. The first strategy optimizes regional energy expansion planning, focusing on capacity expansion of the natural gas system, the power generation system, and the renewable energy system, in addition to the transmission network. With strong support from natural gas and electric facilities, the second strategy provides an optimal day-ahead schedule for the electric power generation system incorporating non-generation resources, i.e., demand response and energy storage. Because of risk aversion, this generation schedule equips a power system with higher reliability and promotes non-generation resources in the smart grid. To take full advantage of power generation sources, the third strategy replaces the traditional energy reserve requirements with risk constraints while ensuring the same level of system reliability; in this way, existing resources can be used to the fullest to accommodate internal and/or external changes in a power system. All problems are formulated as stochastic mixed-integer programs, particularly considering the uncertainties in fuel price, renewable energy output, and electricity demand over time. Exploiting the structure of the models, new decomposition strategies are proposed to decompose the stochastic unit commitment problems, which are then solved by an enhanced Benders decomposition algorithm. Compared to classic Benders decomposition, the proposed solution approach increases convergence speed and thereby reduces computation times by 25% on the same cases.
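The thesis's enhanced Benders variant is not reproduced here, but a textbook L-shaped (Benders) loop on a toy two-stage capacity problem shows where the cuts come from; the subproblem is chosen so its duals are available in closed form, and all numbers are made up:

```python
import numpy as np
from scipy.optimize import linprog

# Toy two-stage problem: build capacity x at unit cost c_cap; scenario s with
# demand d_s and probability p_s incurs recourse cost q * max(d_s - x, 0).
c_cap, q = 1.0, 3.0
d = np.array([2.0, 5.0, 8.0])
p = np.array([0.3, 0.5, 0.2])
x_max = 10.0

cuts, x_k = [], 0.0
for it in range(30):
    # The recourse subproblem is solvable in closed form; g is a
    # subgradient of the expected recourse cost E[Q] at x_k.
    Q = float(p @ (q * np.maximum(d - x_k, 0.0)))
    g = float(-q * (p @ (d > x_k)))
    cuts.append((Q - g * x_k, g))                 # optimality cut: theta >= a + g*x
    # Master over (x, theta): min c_cap*x + theta subject to all cuts so far.
    A_ub = np.array([[gc, -1.0] for _, gc in cuts])   # g*x - theta <= -a
    b_ub = np.array([-a for a, _ in cuts])
    res = linprog([c_cap, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, x_max), (0, None)], method="highs")
    x_k, theta = res.x
    if float(p @ (q * np.maximum(d - x_k, 0.0))) <= theta + 1e-8:
        break                                     # cut model is exact at x_k
print(f"optimal capacity x = {x_k:.4f} after {it + 1} Benders iterations")
```

In stochastic unit commitment the master holds the integer commitment decisions and each scenario dispatch LP plays the role of the closed-form subproblem above; enhancements such as warm-started or strengthened cuts aim to cut down the number of these iterations.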
18

Learning Algorithms Using Chance-Constrained Programs

Jagarlapudi, Saketha Nath 07 1900 (has links)
This thesis explores Chance-Constrained Programming (CCP) in the context of learning. It is shown that chance-constrained approaches lead to improved algorithms for three important learning problems: classification with specified error rates, large-dataset classification, and Ordinal Regression (OR). Using moments of the training data, the CCPs are posed as Second Order Cone Programs (SOCPs). Novel iterative algorithms for solving the resulting SOCPs are also derived. Borrowing ideas from robust optimization theory, the proposed formulations are made robust to moment estimation errors. A maximum margin classifier with specified false positive and false negative rates is derived. The key idea is to employ chance constraints for each class which imply that the actual misclassification rates do not exceed the specified values. The formulation is applied to the case of biased classification. The problems of large-dataset classification and ordinal regression are addressed by deriving formulations which employ chance constraints for clusters in the training data rather than constraints for each data point. Since the number of clusters can be substantially smaller than the number of data points, the resulting formulation size and number of inequalities are very small; hence the formulations scale well to large datasets. The scalable classification and OR formulations are extended to feature spaces, and the kernelized duals turn out to be instances of SOCPs with a single cone constraint. Exploiting this speciality, fast iterative solvers which outperform generic SOCP solvers are proposed. Compared to state-of-the-art learners, the proposed algorithms achieve a speed-up as high as 10000 times when the specialized SOCP solvers are employed. The proposed formulations involve second-order moments of data and hence are susceptible to moment estimation errors. A generic way of making the formulations robust to such estimation errors is illustrated. Two novel confidence sets for moments are derived, and it is shown that when either of the confidence sets is employed, the robust formulations also yield SOCPs.
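The moment-based construction can be sketched as follows: by a one-sided Chebyshev (Cantelli) bound, requiring P(w'x + b >= 1) >= eta for every distribution with class moments (mu, Sigma) reduces to the cone constraint w'mu + b >= 1 + kappa * ||Sigma^(1/2) w|| with kappa = sqrt(eta / (1 - eta)). The sketch below uses cvxpy for convenience and synthetic two-class data; it illustrates the general mechanism, not the thesis's specific formulations or solvers:

```python
import cvxpy as cp
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(3)
Xp = rng.normal([2, 2], 0.5, (200, 2))     # synthetic positive class
Xn = rng.normal([-2, -2], 0.5, (200, 2))   # synthetic negative class

def moments(X):
    return X.mean(0), np.real(sqrtm(np.cov(X.T)))

(mu_p, S_p), (mu_n, S_n) = moments(Xp), moments(Xn)
eta = 0.9
kappa = np.sqrt(eta / (1 - eta))           # Chebyshev-Cantelli multiplier

w, b = cp.Variable(2), cp.Variable()
# Each class-wise chance constraint becomes a second-order cone constraint.
constraints = [mu_p @ w + b >= 1 + kappa * cp.norm(S_p @ w),
               -(mu_n @ w + b) >= 1 + kappa * cp.norm(S_n @ w)]
prob = cp.Problem(cp.Minimize(cp.norm(w)), constraints)   # maximum margin
prob.solve()
print("w =", w.value.round(3), "b =", round(float(b.value), 3))
```

Choosing different eta values per class is what lets the designer specify unequal false-positive and false-negative rates.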
19

Power-Aware Protocols for Wireless Sensor Networks / Conception et analyse de protocoles, pour les réseaux de capteurs sans fil, prenant en compte la consommation d'énergie

Xu, Chuan 15 December 2017 (has links)
In this thesis, we propose a formal energy model which allows, for the first time in the context of population protocols, an analytical study of energy consumption. Population protocols model a special kind of sensor network in which anonymous, uniformly memory-bounded sensors move unpredictably and communicate in pairs. To illustrate the power and usefulness of the proposed energy model, we present formal analyses of time and energy, for the worst and average cases, for accomplishing the fundamental task of data collection. Two power-aware population protocols, the deterministic EB-TTFM and the randomized lazy-TTF, are proposed and studied under two different fairness conditions. Moreover, to obtain the best parameters for lazy-TTF, we adopt optimization techniques and evaluate the resulting performance experimentally. We then continue the study of optimization for the power-aware data collection problem in wireless body area networks (WBANs). A minimax multi-commodity network-flow formulation is proposed to route data packets optimally by minimizing the worst-case power consumption, and a variable neighborhood search approach is developed whose numerical results show its efficiency. Finally, a stochastic optimization model, namely chance-constrained semidefinite programming, is considered for realistic decision-making problems with random parameters. A novel simulation-based algorithm is proposed, with experiments on a real control-theory problem. We show that our method finds a less conservative solution than other approaches within reasonable computation time.
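The thesis's simulation-based algorithm for chance-constrained SDPs is not reproduced here; as a point of reference, the scenario-approach literature gives simple sufficient sample sizes for convex problems, one commonly cited bound (in the style of Calafiore and Campi; exact constants vary across papers) being N >= (2/eps) * (ln(1/delta) + n):

```python
import math

def scenario_sample_size(eps, delta, n_vars):
    """Number of sampled constraints N such that, with confidence 1 - delta,
    the scenario solution of a convex program with n_vars decision variables
    violates the chance constraint with probability at most eps
    (one classical sufficient bound from the scenario-approach literature)."""
    return math.ceil(2.0 / eps * (math.log(1.0 / delta) + n_vars))

print(scenario_sample_size(eps=0.05, delta=1e-6, n_vars=10))  # -> 953
```

Such bounds tend to be conservative, which is exactly the gap that tailored simulation-based methods like the one in this thesis try to close.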
20

Approximations in Stochastic Optimization and Their Applications

Mrázková, Eva January 2010 (has links)
Many engineering problems lead to optimization models with constraints in the form of ordinary (ODE) or partial (PDE) differential equations, while in practice some of the parameters are often uncertain. The thesis considers three engineering problems concerning vibration optimization and optimal design of beam dimensions, with uncertainty entering in the form of a random load or a random Young's modulus. It is shown that two-stage stochastic programming offers a promising approach to solving problems of this type. The corresponding mathematical models, involving ODE or PDE constraints, uncertain parameters, and multiple criteria, lead to (multi-objective) stochastic nonlinear optimization models. It is further shown for which type of problems stochastic programming (the EO reformulation) must be used, and when it suffices to solve a simpler deterministic problem (the EV reformulation), which matters in practice because of computational cost. Computational schemes are proposed that involve discretization methods for the random variables and for the ODE or PDE constraints. The mathematical models derived by these approximations are implemented and solved in GAMS. Solution quality is assessed by interval estimates of the optimality gap computed by a Monte Carlo technique. A parametric analysis of the multi-objective model leads to the computation of the efficient frontier. Approximations of a model containing reliability-related probabilistic terms are studied via mixed-integer nonlinear programming and a penalty-function reformulation. Furthermore, in view of future possibilities of parallel computation for large engineering problems, the PHA (progressive hedging) algorithm is implemented and tested; the results show that it can be used even when the mathematical conditions guaranteeing convergence are not satisfied. Finally, for a deterministic version of one of the problems, the finite difference method is compared with the finite element method using GAMS and ANSYS, with fully comparable results.
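The Monte Carlo optimality-gap assessment mentioned above can be sketched with the standard batch estimator (in the style of Mak, Morton and Wood) on a toy newsvendor problem; the prices, demand distribution, and batch sizes are made-up illustrations, not the thesis's models:

```python
import numpy as np

rng = np.random.default_rng(4)
c_buy, p_sell = 4.0, 10.0                      # toy newsvendor prices

def cost(x, d):                                # negative profit
    return c_buy * x - p_sell * np.minimum(x, d)

def saa_solve(d):                              # exact SAA optimum: the
    k = int(np.ceil(len(d) * (p_sell - c_buy) / p_sell))
    return np.sort(d)[k - 1]                   # critical-fractile order statistic

x_hat = saa_solve(rng.exponential(5.0, 200))   # candidate from one SAA run

# Gap estimate: batch SAA optima give a lower-bound sample; evaluations of
# the fixed candidate on the same batches give an upper-bound sample.
M, N = 30, 200
lbs, ubs = np.empty(M), np.empty(M)
for m in range(M):
    d = rng.exponential(5.0, N)
    lbs[m] = cost(saa_solve(d), d).mean()      # optimal value of batch SAA
    ubs[m] = cost(x_hat, d).mean()             # candidate cost on the batch
gap = ubs - lbs                                # pathwise nonnegative
half = 1.96 * gap.std(ddof=1) / np.sqrt(M)
print(f"estimated optimality gap {gap.mean():.3f} +/- {half:.3f}")
```

A small confidence interval around the gap certifies the candidate solution without ever solving the true (infinite-scenario) problem, which is the role this technique plays in the thesis.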
