621

Problem of hedging of a portfolio with a unique rebalancing moment

Mironenko, Georgy January 2012 (has links)
The paper deals with the problem of finding an optimal one-time rebalancing strategy for the Bachelier model, and makes some remarks on the analogous problem within the Black-Scholes model. The problem is studied on a finite time interval under a mean-square criterion of optimality. The methods of the paper are based on results for the optimal stopping problem and the standard mean-square criterion. The solution of the problem considered in the paper lets us interpret how and, more importantly, when the investor should rebalance the portfolio in order to hedge it in the best way.
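The setting lends itself to a quick numerical illustration. The sketch below is not the paper's analytical solution: it restricts the search to deterministic rebalancing dates and uses an assumed call payoff, assumed parameter values and a delta-hedging rule, purely to show how a one-time rebalancing date can be compared under the mean-square criterion.

```python
# Illustrative sketch only, not the paper's method: Monte Carlo grid search for
# the best deterministic one-time rebalancing date when delta-hedging a call
# option under the Bachelier model S_t = S0 + sigma * W_t, judged by the
# mean-square hedging error E[(payoff - terminal portfolio value)^2].
import numpy as np
from scipy.stats import norm

S0, K, sigma, T = 100.0, 100.0, 15.0, 1.0   # assumed example parameters
n_paths = 50_000

def bachelier_delta(S, t):
    """Call delta Phi(d) with d = (S - K) / (sigma * sqrt(T - t))."""
    d = (S - K) / (sigma * np.sqrt(T - t))
    return norm.cdf(d)

def bachelier_price(S, t):
    """Call price (S - K) Phi(d) + sigma sqrt(T - t) phi(d)."""
    d = (S - K) / (sigma * np.sqrt(T - t))
    return (S - K) * norm.cdf(d) + sigma * np.sqrt(T - t) * norm.pdf(d)

def mean_square_error(t_reb):
    """Hedge at time 0 and once more at t_reb, then hold to maturity."""
    rng = np.random.default_rng(0)            # same draws for every candidate date
    W_t = rng.normal(0.0, np.sqrt(t_reb), n_paths)
    W_T = W_t + rng.normal(0.0, np.sqrt(T - t_reb), n_paths)
    S_t, S_T = S0 + sigma * W_t, S0 + sigma * W_T
    gains = (bachelier_delta(S0, 0.0) * (S_t - S0)
             + bachelier_delta(S_t, t_reb) * (S_T - S_t))
    payoff = np.maximum(S_T - K, 0.0)
    return np.mean((payoff - (bachelier_price(S0, 0.0) + gains)) ** 2)

grid = np.linspace(0.05, 0.95, 19) * T
errors = [mean_square_error(t) for t in grid]
print("best one-time rebalancing date ~ t =", round(grid[int(np.argmin(errors))], 2))
```

The paper itself studies state-dependent (stopping-time) rebalancing strategies; the grid search above only conveys the flavour of trading off early versus late rebalancing under the same criterion.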
622

Conception de réflecteurs pour des applications photométriques / Geometric modeling of surfaces for photometric applications

André, Julien 12 March 2015 (has links)
The far-field reflector problem consists in building a surface that reflects light from a given source back onto a target at infinity with a prescribed intensity distribution. This problem arises in many fields such as art or architecture. In this thesis, we are interested in applications to the car industry. Indeed, this CIFRE thesis is conducted in partnership with the company Optis, which develops lighting and optical simulation software used in the design of car headlights. Surfaces in car headlight reflectors must satisfy several constraints imposed by manufacturers as well as national and international regulatory authorities. These constraints can be objective, such as space requirements or compliance with legal lighting standards, but can also be subjective, such as the aesthetic aspect of the surfaces. Our goal is to provide industrializable tools that solve the reflector problem while taking these constraints into account. First, we focus on the case of point light sources. We rely on the work of Oliker, Glimm, Caffarelli and Wang, who show that the reflector problem can be formulated as an optimal transport problem. This formulation is presented and implemented in a discrete case. In a second step, we take into account some of the constraints imposed by car headlight manufacturers, such as the size and the style of the reflector. The chosen solution uses Bezier surfaces defined as the graph of a function parameterized over a planar domain: Bezier surfaces yield smooth shapes, and the parameterization over a planar domain makes it possible to control the size and style of the reflector. To build such a surface, we propose a heuristic based on a fixed-point iteration. Finally, we take into account extended (non-point) light sources. We present an approach that iteratively adapts the parameters of the reflector so as to minimize a distance between the desired intensity and the reflected intensity. This led us to propose a method that efficiently evaluates the intensity reflected by a surface. The methods developed in this thesis were implemented in an industrial setting in partnership with Optis.
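To make the optimal transport reformulation concrete, here is a minimal, self-contained sketch (not the thesis's reflector code): a discrete optimal transport problem between a source intensity distribution and a prescribed target distribution, solved as a linear program. The sizes, weights and cost matrix below are toy assumptions; in the reflector setting the cost would come from the reflection geometry.

```python
# Minimal sketch of discrete optimal transport solved as a linear program.
# Source "ray" weights mu_i must be shipped onto target intensity weights
# nu_j at cost c_ij; the same structure underlies the semi-discrete far-field
# reflector formulation.  Sizes, weights and costs are toy assumptions.
import numpy as np
from scipy.optimize import linprog

n, m = 5, 4
rng = np.random.default_rng(1)
mu = rng.random(n);  mu /= mu.sum()          # source intensity distribution
nu = rng.random(m);  nu /= nu.sum()          # prescribed target distribution
cost = rng.random((n, m))                    # c_ij: cost of sending i -> j

# Decision variable: the coupling pi_ij, flattened row-wise.
A_eq, b_eq = [], []
for i in range(n):                           # each source weight fully shipped
    row = np.zeros(n * m); row[i * m:(i + 1) * m] = 1.0
    A_eq.append(row); b_eq.append(mu[i])
for j in range(m):                           # each target weight fully received
    row = np.zeros(n * m); row[j::m] = 1.0
    A_eq.append(row); b_eq.append(nu[j])

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=(0, None), method="highs")
print("optimal transport cost:", round(res.fun, 4))
print("coupling:\n", res.x.reshape(n, m).round(3))
```

In practice the discrete reflector problem is solved with more specialized machinery than a dense LP, but the marginal constraints and linear cost above are exactly the structure the reformulation exposes.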
623

Accuracies of Optimal Transmission Switching Heuristics Based on Exact and Approximate Power Flow Equations

Soroush, Milad 22 May 2013 (has links)
Optimal transmission switching (OTS) enables us to remove selected transmission lines from service as a cost reduction method. A mixed integer programming (MIP) model has been proposed to solve the OTS problem based on the direct current optimal power flow (DCOPF) approximation. Previous studies indicated computational issues with the OTS problem and the need for a more accurate model. In order to resolve these computational issues, especially in large real systems, the MIP model has been followed by heuristics that find good, near-optimal solutions in a reasonable time. Line removal recommendations based on the DCOPF approximation may, however, be poor choices of lines to remove from service. We assess the quality of line removal recommendations that rely on DCOPF-based heuristics by estimating the actual cost reduction with the exact alternating current optimal power flow (ACOPF) model, using the IEEE 118-bus test system. We also define an ACOPF-based line-ranking procedure and compare the quality of its recommendations to those of a previously published DCOPF-based procedure. For the 118-bus system, the DCOPF-based line ranking produces poor-quality results, especially when demand and congestion are very high, while the ACOPF-based heuristic produces very good recommendations for line removals, at the expense of much longer computation times. There is a need for approximations to the ACOPF that are accurate enough to produce good results for OTS heuristics, but fast enough for practical use in OTS decisions.
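The ranking idea can be illustrated with a small, self-contained DCOPF sketch. Everything below (the 3-bus network, susceptances, limits, costs and loads) is a made-up toy example, not data from the thesis or the IEEE 118-bus system; it only shows the mechanics of removing each line in turn, re-solving the DCOPF, and ranking lines by estimated savings.

```python
# Toy sketch of a DCOPF-based line ranking for transmission switching on an
# assumed 3-bus system.  Each candidate line is removed in turn, the DCOPF
# linear program is re-solved, and lines are ranked by the estimated cost
# reduction; in the thesis such DCOPF-based recommendations are then
# re-checked against the exact ACOPF.
import numpy as np
from scipy.optimize import linprog

lines = {"1-2": (0, 1, 10.0, 10.0),    # (from bus, to bus, susceptance, flow limit MW)
         "1-3": (0, 2, 10.0, 120.0),
         "2-3": (1, 2, 10.0, 120.0)}
gen_cost = np.array([10.0, 50.0])      # $/MWh for generators at buses 1 and 2
gen_max = np.array([200.0, 200.0])
load = np.array([0.0, 0.0, 90.0])      # MW demand per bus

def dcopf_cost(active_lines):
    """Solve a DC optimal power flow; variables are [Pg1, Pg2, th1, th2, th3]."""
    nb = 3
    c = np.concatenate([gen_cost, np.zeros(nb)])
    A_eq = np.zeros((nb, 2 + nb)); b_eq = load.copy()
    A_eq[0, 0] = 1.0; A_eq[1, 1] = 1.0              # generator injections
    A_ub, b_ub = [], []
    for (i, j, b, fmax) in (lines[k] for k in active_lines):
        # flow i -> j equals b*(th_i - th_j); it leaves bus i and enters bus j
        A_eq[i, 2 + i] -= b; A_eq[i, 2 + j] += b
        A_eq[j, 2 + i] += b; A_eq[j, 2 + j] -= b
        for sign in (+1.0, -1.0):                   # |flow| <= fmax
            row = np.zeros(2 + nb)
            row[2 + i], row[2 + j] = sign * b, -sign * b
            A_ub.append(row); b_ub.append(fmax)
    bounds = ([(0, gen_max[0]), (0, gen_max[1]), (0.0, 0.0)]   # th1 = reference angle
              + [(None, None)] * (nb - 1))
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.fun if res.success else np.inf

base_cost = dcopf_cost(list(lines))
ranking = sorted(((base_cost - dcopf_cost([k for k in lines if k != out]), out)
                  for out in lines), reverse=True)
print(f"base dispatch cost: {base_cost:.1f} $/h")
for saving, name in ranking:
    print(f"switch out line {name}: estimated DCOPF saving {saving:8.1f} $/h")
```

On this toy network, switching out the congested line 1-2 lowers the estimated dispatch cost, which is exactly the counterintuitive effect OTS exploits; the thesis's point is that such DCOPF-based estimates can still rank lines poorly once checked against the ACOPF.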
624

Least-squares optimal interpolation for direct image super-resolution : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Engineering at Massey University, Palmerston North, New Zealand

Gilman, Andrew January 2009 (has links)
Image super-resolution aims to produce a higher resolution representation of a scene from an ensemble of low-resolution images that may be warped, aliased, blurred and degraded by noise. There are a variety of methods for performing super-resolution described in the literature, and in general they consist of three major steps: image registration, fusion and deblurring. This thesis proposes a novel method of performing the first two of these steps. The ultimate aim of image super-resolution is to produce a higher-quality image that is visually clearer, sharper and contains more detail than the individual input images. Machine algorithms cannot assess images qualitatively and typically use a quantitative error criterion, often least-squares. This thesis aims to optimise least-squares directly using a fast method, in particular one that can be implemented using linear filters; hence, a closed-form solution is required. The concepts of optimal interpolation and resampling are derived and demonstrated in practice. Optimal filters optimised on one image are shown to perform near-optimally on other images, suggesting that common image features, such as step-edges, can be used to optimise a near-optimal filter without requiring knowledge of the ground-truth output. This leads to the construction of a pulse model, which is used to derive filters for resampling the non-uniformly sampled images that result from the fusion of registered input images. An experimental comparison shows that a 10th-order pulse model-based filter outperforms a number of methods common in the literature. The use of optimal interpolation for image registration linearises an otherwise nonlinear problem, resulting in a direct solution. Experimental analysis is used to show that optimal interpolation-based registration outperforms a number of existing methods, both iterative and direct, at a range of noise levels and for both heavily aliased images and images with a limited degree of aliasing. The proposed method offers flexibility in terms of the size of the region of support, offering a good trade-off between computational complexity and accuracy of registration. Together, optimal interpolation-based registration and fusion are shown to perform fast, direct and effective super-resolution.
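As a flavour of the least-squares filter-design idea, here is a deliberately simplified 1-D sketch (the thesis works with 2-D images and non-uniform sample positions; the training signal, window size and decimation pattern below are assumptions for illustration). The point being illustrated is that the optimal weights come from a closed-form least-squares solve, so the resulting interpolator is a plain linear filter.

```python
# Illustrative 1-D sketch: fit an interpolation filter by least squares.
# Even-indexed samples of a signal are treated as the known low-resolution
# data and the filter is trained to predict the odd-indexed (missing) samples
# from a window of known neighbours.
import numpy as np

rng = np.random.default_rng(0)
n, taps = 4096, 6                        # signal length, filter support (known samples)
signal = np.cumsum(rng.normal(size=n))   # smooth-ish training signal (random walk)

low = signal[0::2]                       # "known" samples
target = signal[1::2]                    # samples the filter should reconstruct

# Build the regression matrix: each missing sample is predicted from `taps`
# surrounding known samples.
half = taps // 2
rows, y = [], []
for k in range(half, len(target) - half):
    rows.append(low[k - half + 1:k + half + 1])   # window of known neighbours
    y.append(target[k])
X, y = np.asarray(rows), np.asarray(y)

# Closed-form least-squares solve for the filter weights.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print("optimal filter weights:", np.round(w, 4))
print("training RMS error:", round(float(np.sqrt(np.mean((X @ w - y) ** 2))), 4))
```

Weights fitted on one signal can then be applied, as a fixed linear filter, to other signals of similar statistics, which is the transferability property the thesis exploits for registration and fusion.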
628

Stabilité et perturbations optimales globales d'écoulements compressibles pariétaux / Stability and global optimal perturbations of wall-bounded compressible flows

Bugeat, Benjamin 12 December 2017 (has links)
Wall-bounded compressible flows have been studied by means of optimal forcing computations in order to characterize the noise-amplifier nature of these flows. This approach takes into account the non-modal growth of linear perturbations induced by the non-normality of the linearized Navier-Stokes equations. The numerical strategy is based on the computation of the global resolvent matrix and on an eigenvalue problem stemming from an optimization problem. The optimal forcing and response energy densities computed for a supersonic boundary layer have been linked to the experimental neutral curve obtained by Laufer and Vrebalovich, provided that the forcing is constrained to lie upstream of the lower branch. Afterwards, a parametric study in Mach number of the 2D receptivity of a laminar shock-wave/boundary-layer interaction characterized the growth of convective Kelvin-Helmholtz and Tollmien-Schlichting (TS) instabilities occurring at high frequencies. At low frequencies, the receptivity of the system has been linked to the resonance of a stable global mode. Furthermore, the 2D numerical method has been extended to compute 3D perturbations. Its application to the optimal forcing of a supersonic boundary layer at M = 4.5 revealed the 3D non-modal growth of streaks as well as the development of oblique TS waves, whose growth in the compressible regime is stronger than that of 2D waves. This study also captured the growth of the Mack mode at higher frequencies.
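The core computation can be illustrated on a toy operator. The sketch below is not the thesis's compressible Navier-Stokes solver: it uses an assumed small, stable but strongly non-normal matrix standing in for the linearized operator, and simply extracts the leading singular triplet of the resolvent, which is what the optimal forcing/response problem reduces to.

```python
# Minimal resolvent-analysis sketch on a toy operator: the optimal-forcing
# computation reduces to the leading singular triplet of the global resolvent
# R(omega) = (i*omega*I - A)^(-1).  The largest singular value is the optimal
# energy gain, the leading right singular vector the optimal forcing and the
# leading left singular vector the optimal response.
import numpy as np

n = 50
A = -np.eye(n) + 1.2 * np.diag(np.ones(n - 1), k=1)   # eigenvalues all at -1, non-normal

omegas = np.linspace(0.0, 3.0, 31)
gains = [np.linalg.svd(np.linalg.inv(1j * w * np.eye(n) - A),
                       compute_uv=False)[0] for w in omegas]

k = int(np.argmax(gains))
U, s, Vh = np.linalg.svd(np.linalg.inv(1j * omegas[k] * np.eye(n) - A))
f_opt, q_opt = Vh[0].conj(), U[:, 0]                  # optimal forcing / optimal response
print(f"peak optimal gain {s[0]:.3g} at omega = {omegas[k]:.2f}")
print("(a normal operator with the same spectrum could not exceed a gain of 1)")
```

The gain being far above the purely modal bound is precisely the non-modal amplification quantified in the thesis; there the operator is the discretized, energy-weighted linearized Navier-Stokes operator rather than a toy matrix.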
629

Optimering och dimensionering av ett solcellssystem till ett flerbostadshus i Mellansverige : En beräknings- och simuleringsstudie / Optimization and sizing of a photovoltaic system for an apartment building in central Sweden: a calculation and simulation study

Forslund, John January 2018 (has links)
Solar power could cover the earth's entire energy demand many times over without emitting greenhouse gases or other toxic substances during operation, and it is therefore counted as a sustainable, renewable energy source and a suitable candidate to replace today's unsustainable fossil-based energy systems. The price of solar cells has fallen sharply in recent years. At the same time, Sweden and the EU have targets for reducing carbon dioxide emissions, so both a tax reduction for surplus solar electricity sold to the grid and an investment subsidy are offered. It may therefore be profitable to install solar cells in Sweden despite the limited solar irradiation. It is also suitable to install solar panels on buildings, since the building and service sector uses the most electricity in Sweden. The payback period must be reasonable before private investors decide to invest in solar cells; according to surveys, environmental benefits alone are not a motivating factor. Photovoltaic installations should therefore be optimized and sized for maximum economic profitability to increase the chances that the investment is actually made. This thesis examines what an economically optimal photovoltaic system looks like for a housing cooperative of 25 apartments in central Sweden under different economic conditions, with the main focus on how the tilt angle changes the result. The electricity production of different system configurations was simulated, and the results were compared with the building's electricity use to calculate how much electricity is self-consumed to offset purchased electricity and how much is sold, and from that the profitability of each system. The most electricity is produced at a tilt angle of 40°, which gives marginally more than the 30° at which the roof is tilted. The tilt angle can be adjusted to increase profitability, but only by a few percent at most. The difference is largest for small systems that just cover the building's base load; the best angle for these smaller systems is 45°. It is hard to justify the more expensive mounting needed to tilt the modules up, since the roof already slopes close to the optimum. The difference in value between purchased and saved electricity is small if the tax reduction is granted, but it is unclear how long the tax reduction will last, so it is safer to size the system for the building's own electricity demand; a system sized to sell a large surplus could turn into a substantial loss. If solar power takes a larger share of Sweden's electricity production, it may give rise to higher global carbon dioxide emissions depending on which energy source it replaces. At the same time, it takes longer in Sweden than in other countries before a solar cell can be counted as carbon neutral, since Sweden's electricity mix already has very low carbon dioxide emissions and solar irradiation is relatively low. From an environmental perspective, solar power in Sweden is therefore questionable.
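The tilt-angle comparison can be sketched with a very rough model. Everything below (latitude, prices, building load, array size and the crude clear-sky beam model) is an assumption made for illustration, not the simulation or data used in the thesis; it only shows the structure of sweeping tilt angles and comparing pure yield with a self-consumption-weighted revenue measure.

```python
# Rough toy sketch: sweep the tilt angle of a south-facing array at an assumed
# latitude of 60 N, estimate hourly production with a crude clear-sky beam
# model, and compare tilts both by pure yield and by a simple revenue measure
# where self-consumed electricity is worth the purchase price and exported
# electricity a lower spot price.
import numpy as np

lat = np.radians(60.0)
buy_price, sell_price = 1.5, 0.6          # SEK/kWh, assumed
base_load_kw = 5.0                        # assumed constant building demand
kw_peak = 10.0                            # assumed array output at 1000 W/m2

def hourly_production_kw(tilt_deg):
    tilt = np.radians(tilt_deg)
    hours = np.arange(365 * 24)
    day, hour_angle = hours // 24, np.radians((hours % 24 - 12) * 15.0)
    decl = np.radians(23.45) * np.sin(2 * np.pi * (284 + day) / 365)
    sin_elev = np.sin(lat) * np.sin(decl) + np.cos(lat) * np.cos(decl) * np.cos(hour_angle)
    # incidence angle on a south-facing surface tilted by `tilt`
    cos_inc = (np.sin(decl) * np.sin(lat - tilt)
               + np.cos(decl) * np.cos(lat - tilt) * np.cos(hour_angle))
    dni = np.where(sin_elev > 0, 800.0, 0.0)          # crude clear-sky beam irradiance, W/m2
    return kw_peak * np.clip(cos_inc, 0.0, None) * dni / 1000.0

def annual_revenue(tilt_deg):
    prod = hourly_production_kw(tilt_deg)             # kWh per 1-hour step
    self_used = np.minimum(prod, base_load_kw)
    return self_used.sum() * buy_price + (prod - self_used).sum() * sell_price

tilts = np.arange(10, 71, 5)
print("best tilt for yield  :", max(tilts, key=lambda t: hourly_production_kw(t).sum()), "deg")
print("best tilt for revenue:", max(tilts, key=annual_revenue), "deg")
```

A real sizing study would use measured irradiation with diffuse and reflected components, system losses and the building's actual hourly load profile, which is what allows the thesis to conclude that re-tilting an already roughly 30° roof gains only a few percent.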
630

Analyse et contrôle optimal d'un bioréacteur de digestion anaérobie / Analysis and optimal control of an anaerobic digestion bioreactor

Ghouali, Amel 14 December 2015 (has links)
This thesis focuses on the optimal control of an anaerobic digester for maximizing its biogas production. In particular, using a simple model of the anaerobic digestion process, we derive a control law that maximizes the biogas production over a given period of time, using the dilution rate D(.) as the control variable. Depending on the initial conditions and on the constraints on the actuator, the search for a solution to the optimal control problem presents very different levels of difficulty. In the first part, we consider that there are no severe constraints on the actuator: the interval in which the input flow rate lives includes the value that maximizes the biogas flow rate at equilibrium. For this case, named WDAC (Well Dimensioned Actuator Case), we solve the optimal control problem using classical tools from the analysis of differential equations, based on a comparison of trajectories of a dynamical system. Numerical simulations illustrate the robustness of the control law with respect to several parameters, notably the initial conditions. We use these results to show that a heuristic control law proposed in the literature is optimal in a certain sense; in particular, both control laws drive the system to the same optimal point. The optimal trajectories are then compared with those given by a purely numerical optimal control solver (the "BOCOP" toolkit), an open-source toolbox for solving optimal control problems. When the exact analytical solution to the optimal control problem cannot be found, we suggest that such a numerical tool can be used to get an intuition of the optimal solutions. In the second part, the problem of maximizing the biogas production is treated when the actuator is under- (or over-) dimensioned. These are the UDAC (Under Dimensioned Actuator Case) and ODAC (Over Dimensioned Actuator Case) situations, which we solve by applying Pontryagin's Maximum Principle.
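To reproduce the flavour of the numerical (BOCOP-style) comparison, here is a small direct-method sketch: the dilution rate is discretized as a piecewise-constant profile, a standard one-step chemostat model with Monod kinetics is integrated by explicit Euler, and the profile is optimized under bounds. The model structure follows the abstract, but the kinetics, parameter values and horizon are assumptions, and this is not the thesis's analytical Pontryagin solution.

```python
# Sketch of a direct (discretize-then-optimize) approach to maximizing biogas
# over [0, T] for a one-step chemostat model with Monod growth:
#   s' = D (s_in - s) - mu(s) x,   x' = (mu(s) - D) x,
# with the biogas flow taken proportional to mu(s) * x.  Parameters are assumed.
import numpy as np
from scipy.optimize import minimize

s_in, mu_max, K = 5.0, 1.0, 1.0
D_max, T, n_steps = 1.5, 20.0, 40
dt = T / n_steps
s0, x0 = 1.0, 1.0

def mu(s):
    return mu_max * s / (K + s)               # Monod growth rate

def negative_biogas(D_profile):
    """Euler-integrate the chemostat and return minus the produced biogas."""
    s, x, biogas = s0, x0, 0.0
    for D in D_profile:
        growth = mu(s) * x
        biogas += growth * dt                 # biogas flow proportional to mu(s)*x
        s, x = (s + dt * (D * (s_in - s) - growth),
                x + dt * (mu(s) - D) * x)
        s, x = max(s, 0.0), max(x, 0.0)
    return -biogas

D0 = np.full(n_steps, 0.5 * D_max)            # initial guess: constant dilution rate
res = minimize(negative_biogas, D0, method="L-BFGS-B",
               bounds=[(0.0, D_max)] * n_steps)
print("produced biogas over [0, T]:", round(-res.fun, 3))
print("optimal dilution profile (first steps):", np.round(res.x[:8], 3))
```

Such a direct solve only gives a numerical approximation of the optimal profile; the thesis's contribution is the exact characterization of the optimal policy for the WDAC, UDAC and ODAC situations.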
