  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Estudo de tecnicas de controle H-infinito para estruturas flexiveis com incertezas / A study on H-infinity control techniques for uncertain flexible structures

Mazoni, Alysson Fernandes 22 February 2008 (has links)
Advisor: Alberto Luiz Serpa / Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica, 2008.

Abstract: This dissertation deals with modern techniques from robust H-infinity control of linear dynamic systems. By this it is meant that results from linear control systems theory are used as mathematical tools when several kinds of uncertainty are admitted on the models. The models are mostly flexible structures, and the design methods are implemented solely through the solution of problems subject to linear matrix inequalities. Parametric, dynamic and polytopic uncertainty are addressed, with the aim of presenting mathematical controller design methods for the uncertain systems considered. For dynamic uncertainty, weighting filters are introduced. In contrast with this approach, recent results from the literature on the generalized Kalman-Yakubovich-Popov lemma and frequency-restricted H-infinity are also used as control design methods whose application does not depend on weighting filters. All methods are compared using simulation models and experiments with flexible structures.

Keywords: Theory of dynamical systems, Convex programming, Feedback control systems / Master's in Mechanical Engineering (Solid Mechanics and Mechanical Design)
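The frequency-restricted H-infinity idea contrasted above with weighting filters can be illustrated without any LMI machinery: the H-infinity norm of a stable SISO transfer function is the peak of its magnitude over frequency, and restricting the frequency band changes which peak matters. A minimal numpy sketch (the lightly damped mode and the frequency grids are illustrative assumptions, not the thesis's models or its LMI synthesis):

```python
import numpy as np

def hinf_norm(G, omegas):
    """Approximate the H-infinity norm of a stable SISO transfer function
    by sampling its magnitude over a grid of frequencies (rad/s)."""
    return float(np.max(np.abs(G(1j * omegas))))

# One lightly damped flexible mode: G(s) = 1 / (s^2 + 2*zeta*wn*s + wn^2)
wn, zeta = 10.0, 0.01
G = lambda s: 1.0 / (s**2 + 2 * zeta * wn * s + wn**2)

full_band = np.linspace(0.1, 100.0, 100_000)   # grid including the resonance
restricted = np.linspace(20.0, 100.0, 80_000)  # grid excluding the resonance

norm_full = hinf_norm(G, full_band)            # peak near 1/(2*zeta*wn^2) = 0.5
norm_restricted = hinf_norm(G, restricted)     # far smaller away from resonance

print(norm_full > norm_restricted)  # True
```

A frequency-restricted design only needs to push down the magnitude over the band of interest, which is why it can avoid the weighting filters that a full-band design uses to shape the response.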
42

Controle H2/H-infinito de estruturas flexiveis atraves de desigualdades matriciais lineares com alocação de polos / H2/H-infinity control of flexible structures through linear matrix inequalities with pole placement

Lopes, Jean Cutrim 02 February 2005 (has links)
Advisor: Alberto Luiz Serpa / Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica, 2005.

Abstract: The objective of this work is to apply the mixed H2/H-infinity control technique, using linear matrix inequalities with pole placement constraints, to the flexible structures control problem. H2/H-infinity control is a technique for designing a controller with mixed features of the H2 and H-infinity formulations: optimal dynamic performance and robust performance, respectively. Linear matrix inequalities make it possible to formulate controller synthesis as a convex optimization problem to which additional constraints, such as pole placement, can be added. The pole placement requirement comes from the need to adjust the transient response of the plant and to ensure specific behavior in terms of response speed and damping. The model used in this study is a flexible beam subject to a disturbance, with a non-collocated actuator. The state-space matrices of the structure were obtained by the finite element method with the Euler-Bernoulli beam formulation. The results show that pole placement constraints can improve the performance of the H2/H-infinity controller. Matlab was used for the computational implementation.

Master's in Mechanical Engineering (Mechanical Design and Solid Mechanics)
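Pole placement as a way to set response speed and damping can be sketched on a toy plant. The dissertation enforces pole regions through LMI constraints; as a hand-computable stand-in, the snippet below places the closed-loop poles of a double integrator by state feedback (the plant, the target poles, and the gains are illustrative assumptions, not the beam model):

```python
import numpy as np

# Double-integrator toy plant xdot = A x + B u (a stand-in for one beam mode)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Target closed-loop poles chosen for speed and damping: s = -2 +/- 2j,
# i.e. characteristic polynomial s^2 + 4s + 8. With u = -K x and
# K = [k1, k2], the closed loop has polynomial s^2 + k2*s + k1,
# so k1 = 8 and k2 = 4.
K = np.array([[8.0, 4.0]])

poles = np.linalg.eigvals(A - B @ K)
print(np.sort_complex(poles))  # approximately [-2.-2.j, -2.+2.j]
```

An LMI formulation generalizes this: instead of fixing the poles exactly, it constrains them to a region (a half-plane, disk, or conic sector) while the same convex program also optimizes the H2/H-infinity objectives.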
43

Learning through the lens of computation

Peng, Binghui January 2024 (has links)
One of the major mysteries in science is the remarkable success of machine learning. While its empirical achievements have captivated the field, our theoretical comprehension lags significantly behind. This thesis seeks to advance our theoretical understanding of learning and intelligence from a computational perspective. By studying fundamental learning, optimization, and decision-making tasks, we aspire to shed light on the impact of computation on artificial intelligence. The first part of this thesis concerns the space resources needed for learning. By studying the fundamental role of memory in continual learning, online decision making, and convex optimization, we find that both continual learning and iteration-efficient convex optimization require a lot of memory, while for decision making exponential savings are possible. More concretely: (1) we prove there is no memory reduction in continual learning unless the continual learner takes multiple passes over the sequence of environments; (2) we prove that in order to optimize a convex function in the most iteration-efficient way, an algorithm must use a quadratic amount of space; (3) we show that polylogarithmic space is sufficient for making near-optimal decisions in an oblivious adversary environment; in sharp contrast, a quadratic saving is both sufficient and necessary to achieve vanishing regret in an adaptive adversarial environment. The second part of this thesis uses learning as a tool and resolves a series of long-standing open questions in algorithmic game theory. By giving an exponentially faster no-swap-regret learning algorithm, we obtain algorithms that achieve near-optimal computation/communication/iteration complexity for computing an approximate correlated equilibrium in a normal-form game, and we give the first polynomial-time algorithm for computing approximate normal-form correlated equilibria in imperfect-information games (including Bayesian and extensive-form games).
44

Métodos de penalidade e barreira para programação convexa semidefinida / Penalty/barrier methods for convex semidefinite programming

Santos, Antonio Carlos dos 29 May 2009 (has links)
Abstract: This work deals with multiplier methods for solving convex semidefinite programming problems, and with the analysis of their properties based on the proximal point method applied to the dual problem. We focus on a subclass of semidefinite programming problems with affine constraints, for which we study duality relations and conditions for the existence of solutions of the primal and dual problems. We then analyze two multiplier methods for this class of problems, both extensions of methods known from nonlinear programming. The first, introduced by Doljansky and Teboulle, is an entropic interior proximal point method and its connection with an exponential multiplier method. The second, presented by Mosheyev and Zibulevsky, extends to the problems of our interest a smooth augmented Lagrangian method proposed by Ben-Tal and Zibulevsky. Finally, we present numerical experiments with the algorithm proposed by Mosheyev and Zibulevsky, analyzing choices of parameters, the exploitation of the sparsity pattern of the problem matrices, and criteria for accepting approximate solutions of the unconstrained subproblems that must be solved at each iteration of the augmented Lagrangian method.
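The exponential multiplier method mentioned above can be illustrated on a one-dimensional convex program rather than a semidefinite one: each outer step minimizes a smooth exponential penalty and then rescales the multiplier. A minimal sketch under that simplification (the toy problem, penalty parameter, and iteration counts are assumptions for illustration, not the thesis's algorithm for SDPs):

```python
import math

def exp_multiplier_method(c=1.0, mu=1.0, outer_iters=50):
    """Exponential multiplier method for the toy convex program
        minimize x^2   subject to  1 - x <= 0.
    Each outer iteration minimizes the smooth augmented objective
        x^2 + (mu / c) * exp(c * (1 - x))
    by Newton's method, then updates mu multiplicatively."""
    x = 0.0
    for _ in range(outer_iters):
        for _ in range(50):                    # Newton on the smooth subproblem
            e = math.exp(c * (1.0 - x))
            grad = 2.0 * x - mu * e
            hess = 2.0 + c * mu * e            # always positive: strictly convex
            x -= grad / hess
        mu *= math.exp(c * (1.0 - x))          # multiplier update
    return x, mu

x_star, mu_star = exp_multiplier_method()
print(round(x_star, 6), round(mu_star, 6))  # -> 1.0 2.0 (optimum, KKT multiplier)
```

The same two-level structure (smooth unconstrained subproblem plus multiplier update) is what carries over to the semidefinite setting, with matrix-valued penalties in place of the scalar exponential.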
45

Some Topics in ROC Curve Analysis

Huang, Xin 07 May 2011 (has links)
The receiver operating characteristic (ROC) curve is a popular tool for evaluating continuous diagnostic tests. The traditional definition of ROC curves implicitly incorporates the idea of "hard" thresholding, which also makes the empirical curves step functions. The first topic introduces a novel definition of soft ROC curves, which incorporates the idea of "soft" thresholding. The softness of a soft ROC curve is controlled by a regularization parameter that can be selected by a cross-validation procedure. A byproduct of soft ROC curves is that the corresponding empirical curves are smooth. The second topic is the combination of several diagnostic tests to achieve better diagnostic accuracy. We consider the optimal linear combination that maximizes the area under the ROC curve (AUC); estimates of the combination's coefficients can be obtained via a non-parametric procedure. However, for estimating the AUC associated with the estimated coefficients, the apparent estimate obtained by re-substitution is too optimistic. To adjust for the upward bias, several methods are proposed. Among them the cross-validation approach is especially advocated, and an approximate cross-validation is developed to reduce the computational cost. Furthermore, the proposed methods can be applied to variable selection, to select important diagnostic tests. However, best-subset variable selection is not practical when the number of diagnostic tests is large. The third topic develops a LASSO-type procedure for variable selection. To solve the non-convex maximization problem in the proposed procedure, an efficient algorithm is developed based on soft ROC curves, difference-of-convex programming, and a coordinate descent algorithm.
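The hard-vs-soft thresholding contrast can be sketched directly: a hard ROC point counts the fraction of scores above a threshold, while a soft point replaces the indicator with a sigmoid whose bandwidth plays the role of the regularization parameter. A minimal numpy sketch with synthetic Gaussian scores (the score distributions and bandwidth are illustrative assumptions, not the dissertation's estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
neg = rng.normal(0.0, 1.0, 200)   # test scores, non-diseased group
pos = rng.normal(1.5, 1.0, 200)   # test scores, diseased group

def roc_point(threshold, h=None):
    """One (FPR, TPR) point. With h=None this is the usual hard indicator
    1{score > threshold}; with h > 0 the indicator is replaced by a sigmoid
    of bandwidth h, which is what makes the resulting curve smooth."""
    if h is None:
        return np.mean(neg > threshold), np.mean(pos > threshold)
    sig = lambda s: 1.0 / (1.0 + np.exp(-(s - threshold) / h))
    return np.mean(sig(neg)), np.mean(sig(pos))

ts = np.linspace(-3.0, 4.0, 200)
hard = np.array([roc_point(t) for t in ts])         # a step function in t
soft = np.array([roc_point(t, h=0.3) for t in ts])  # varies smoothly with t
```

As h shrinks toward zero the soft curve recovers the hard one; selecting h by cross-validation is the route the dissertation advocates for choosing the regularization parameter.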
46

Variação do controle como fonte de incerteza / Control variation as a source of uncertainty

Calmon, Andre du Pin 14 August 2018 (has links)
Advisor: João Bosco Ribeiro do Val / Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação, 2009.

Abstract: This dissertation presents a theoretical framework and a control strategy for discrete-time stochastic systems in which control variations increase state uncertainty (CVIU systems). This type of model can be useful in many practical situations, such as monetary policy problems, medicine and biology, and, in general, problems for which a complete dynamic model is too complex to be feasible. Using tools from nonsmooth analysis and dynamic programming, it is shown for a multidimensional CVIU system that convexity is an invariant of the dynamic programming value function when the stage cost is convex. The resulting optimal control strategy points to a region in the state space in which the optimal action is no variation (the region of no variation), as expected from the cautious nature of controlling underdetermined systems. Numerical strategies for obtaining the optimal policy in CVIU systems were also developed, with focus on the single-input case evaluated through a quadratic cost functional. The results are illustrated through a numerical example in economics, applied to the conduct of monetary policy by a central bank.

Master's in Electrical Engineering (Automation)
48

Topics in Network Utility Maximization: Interior Point and Finite-step Methods

Akhil, P T January 2017 (has links) (PDF)
Network utility maximization has emerged as a powerful tool for studying flow control, resource allocation, and other cross-layer optimization problems. In this work, we study a flow control problem in the optimization framework. The objective is to maximize the sum utility of the users subject to the flow constraints of the network. The utility maximization is solved in a distributed setting: the network operator does not know the user utility functions, and the users know neither the rate choices of other users nor the flow constraints of the network. We build upon a popular decomposition technique proposed by Kelly [Eur. Trans. Telecommun., 8(1), 1997] to solve the utility maximization problem in this distributed setting. The technique decomposes the utility maximization problem into a user problem, solved by each user, and a network problem, solved by the network. We propose an iterative algorithm based on this decomposition. In each iteration, the users communicate to the network their willingness to pay for the network resources, and the network allocates rates in a proportionally fair manner based on the prices communicated by the users. The new feature of the proposed algorithm is that the rates allocated by the network remain feasible at all times. We show that the iterates put out by the algorithm asymptotically track a differential inclusion, and that the solution to the differential inclusion converges to the system optimal point via Lyapunov theory. As a benchmark we use a popular algorithm due to Kelly et al. [J. of the Oper. Res. Soc., 49(3), 1998] that involves fast user updates coupled with slow network updates in the form of additive increase and multiplicative decrease of the user flows. The proposed algorithm may be viewed as one with fast user updates and fast network updates that keeps the iterates feasible at all times.
Simulations suggest that our proposed algorithm converges faster than this benchmark. When the flows originate or terminate at a single node, the network problem is the maximization of a so-called d-separable objective function over the bases of a polymatroid, and the solution is the lexicographically optimal base of the polymatroid. We map the problem of finding the lexicographically optimal base of a polymatroid to the geometric problem of finding the concave cover of a set of points on a two-dimensional plane, and describe an algorithm that finds the concave cover in linear time. Next, we consider the minimization of a more general objective function, a separable convex function, over the bases of a polymatroid with a special structure. We propose a novel decomposition algorithm and prove its correctness and optimality via the theory of polymatroids. Further, motivated by the need to handle piecewise linear concave utility functions, we extend the decomposition algorithm to the case where the separable convex functions are not continuously differentiable or not strictly convex, and again provide a proof of its correctness and optimality.
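The network problem's proportionally fair allocation has a closed form in the simplest single-link case, which makes the "rates stay feasible" idea concrete. A minimal sketch (single shared link; the weights and capacity are illustrative assumptions, not the thesis's general network algorithm):

```python
import numpy as np

def prop_fair_single_link(w, capacity):
    """Kelly's network problem on one shared link:
        maximize sum_i w_i * log(x_i)  subject to  sum_i x_i <= capacity.
    The KKT conditions give the closed form x_i = capacity * w_i / sum(w),
    which is feasible by construction -- the property the proposed
    algorithm preserves at every iteration in the general network case."""
    w = np.asarray(w, dtype=float)
    return capacity * w / w.sum()

w = [1.0, 2.0, 5.0]   # the users' willingness to pay (prices communicated)
x = prop_fair_single_link(w, capacity=8.0)
print(x)  # [1. 2. 5.]
```

In a general network the same allocation has no closed form and must be computed iteratively, which is where the fast-user/fast-network update scheme above comes in.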
49

Adaptation de l’algorithmique aux architectures parallèles / Adapting algorithms to parallel architectures

Borghi, Alexandre 10 October 2011 (has links)
In this thesis, we are interested in adapting algorithms to parallel architectures. Current high-performance platforms have several levels of parallelism and require significant work to exploit them. Supercomputers possess more and more computational units and are increasingly heterogeneous and hierarchical, which makes them ever harder to use. We consider several aspects of taking advantage of modern parallel architectures. Throughout this thesis, several problems of different natures are tackled, more theoretically or more practically according to the context and the scale of the parallel platforms considered. We have worked on modeling problems in order to adapt their formulation to existing solvers or resolution methods, in particular the integer factorization problem, modeled and solved with integer programming tools. The main contribution of this thesis is the design of algorithms conceived from the start to be efficient on modern architectures (multi-core processors, Cell, GPU).
Two algorithms solving the compressive sensing problem were designed in this context: the first uses linear programming and finds an exact solution, whereas the second uses convex programming and finds an approximate solution. We have also used a high-level parallelization library based on the BSP model, in the context of model checking, to implement an existing algorithm in parallel. From a single implementation, this tool enables the algorithm to run on platforms with different levels of parallelism while obtaining first-rate performance on each of them. The largest-scale platform considered here is a cluster of multi-core multiprocessors. Moreover, for the very particular Cell processor, an implementation was rewritten from scratch to take advantage of it.
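The approximate convex-programming route to compressive sensing can be sketched, in serial form, by iterative soft thresholding (ISTA) for the l1-regularized least-squares relaxation; this is a generic stand-in, not the thesis's parallel implementation, and the problem sizes, seed, and parameters below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 80, 30, 3                  # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(0.0, 1.0, k)
A = rng.normal(0.0, 1.0, (m, n)) / np.sqrt(m)   # random sensing matrix
b = A @ x_true                                  # noiseless measurements

def ista(A, b, lam=1e-3, iters=20000):
    """Iterative soft thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1:
    a gradient step on the quadratic followed by the shrinkage operator."""
    L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - b) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

x_hat = ista(A, b)
```

Each iteration is two matrix-vector products plus an elementwise shrinkage, exactly the kind of regular data-parallel kernel that maps well onto multi-core, Cell, or GPU targets.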
50

Optimisation stochastique avec contraintes en probabilités et applications / Chance constrained problem and its applications

Peng, Shen 17 June 2019 (has links)
Chance constrained optimization is a natural and widely used approach to obtain profitable and reliable decisions under uncertainty, and the theory and applications of chance constrained problems remain an active and attractive topic. However, some important issues still require non-trivial effort to solve. In view of this, we systematically investigate chance constrained problems from the following perspectives. As a basis, we first review the main research results on chance constraints from three angles: convexity of chance constraints, reformulations and approximations of chance constraints, and distributionally robust chance constraints. For stochastic geometric programs, we consider a joint rectangular geometric chance constrained program. Assuming elliptically distributed and pairwise independent stochastic parameters, we derive a reformulation of joint rectangular geometric chance constrained programs. As the reformulation is not convex, we propose new convex approximations based on a variable transformation together with piecewise linear approximation methods. Our numerical results show that these approximations are asymptotically tight. When the probability distributions are not known in advance, or a reformulation of the chance constraints is hard to obtain, bounds on chance constraints can be very useful. We therefore develop four upper bounds for individual and joint chance constraints whose matrix rows are independent. Based on the one-sided Chebyshev, Chernoff, Bernstein, and Hoeffding inequalities, we propose deterministic approximations of chance constraints, and derive various sufficient conditions under which these approximations are convex and tractable. To further reduce computational complexity, we reformulate the approximations as tractable convex optimization problems based on piecewise linear and tangent approximations.
Finally, numerical experiments on randomly generated data are discussed in order to identify the tight deterministic approximations. In some complex systems, the distribution of the random parameters is only partially known. To deal with such uncertainty about both the distribution and the sample data, we propose a data-driven uncertainty set based on mixture distributions, constructed so as to simultaneously estimate higher-order moments. With this uncertainty set, we derive a reformulation of the data-driven robust chance constrained problem. As the reformulation is not a convex program, we propose new tight convex approximations based on the piecewise linear approximation method under certain conditions. For the general case, we propose a DC approximation to derive an upper bound, and a relaxed convex approximation to derive a lower bound, on the optimal value of the original problem, and we establish the theoretical foundations of these approximations. Simulation experiments show that the proposed approximations are practical and efficient. Finally, we consider a stochastic n-player non-cooperative game. When the strategy set of each player contains stochastic linear constraints, we model them as a joint chance constraint, assuming for each player that the row vectors of the matrix defining the stochastic constraints are pairwise independent. We then formulate the chance constraints under normal distributions, elliptical distributions, and distributional robustness, respectively. Under certain conditions, we show the existence of a Nash equilibrium for these stochastic games.
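The one-sided Chebyshev bound mentioned above turns an individual chance constraint P(aᵀx ≤ b) ≥ 1 − ε into a deterministic condition using only the mean and covariance of a. A minimal numpy sketch, with a Monte Carlo check that the bound is conservative (the data and the Gaussian used for the check are illustrative assumptions, not the thesis's test problems):

```python
import numpy as np

eps = 0.05                        # allowed violation probability
mu = np.array([1.0, 2.0])         # mean of the random constraint row a
Sigma = np.array([[0.20, 0.05],
                  [0.05, 0.10]])  # covariance of a
x = np.array([1.0, 1.0])          # a candidate decision

def chebyshev_rhs(x, mu, Sigma, eps):
    """Smallest right-hand side b for which the one-sided Chebyshev bound
    certifies P(a @ x <= b) >= 1 - eps for EVERY distribution of a with
    this mean and covariance:  b = mu'x + sqrt((1-eps)/eps) * sqrt(x'Sigma x)."""
    std = np.sqrt(x @ Sigma @ x)
    return mu @ x + np.sqrt((1.0 - eps) / eps) * std

b = chebyshev_rhs(x, mu, Sigma, eps)

# Monte Carlo sanity check under one particular distribution (Gaussian):
# the distribution-free bound should be conservative.
rng = np.random.default_rng(2)
a = rng.multivariate_normal(mu, Sigma, 100_000)
violation = float(np.mean(a @ x > b))
print(violation < eps)  # True
```

The conservatism visible here (the Gaussian violates far less often than ε) is exactly why comparing the Chebyshev, Chernoff, Bernstein, and Hoeffding bounds, as the thesis does, matters in practice.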
