  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
211

Optimal Electrodynamic Tether Phasing and Orbit-Raising Maneuvers

Bitzer, Matthew Scott 17 June 2009 (has links)
We present optimal solutions for a point-mass electrodynamic tether (EDT) performing phasing and orbit-raising maneuvers. An EDT is a conductive tether on the order of 20 km in length and uses a Lorentz force to provide propellantless thrust. We develop the optimal equations of motion using Pontryagin's Minimum Principle. We find numerical solutions using a global, stochastic optimization method called Adaptive Simulated Annealing. The method uses Markov chains and the system's cost function to narrow down the search space. Newton's Method brings the error in the residual to below a specific tolerance. We compare the EDT solutions to similar constant-thrust solutions and investigate the patterns in the solution space. The EDT phasing maneuver has invariance properties similar to constant-thrust phasing maneuvers. Analyzing the solution space reveals that the EDT is faster at performing phasing maneuvers but slower at performing orbit-raising maneuvers than constant-thrust spacecraft. Also several bifurcation lines occur in the solution spaces for all maneuvers studied. / Master of Science
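The two-stage numerical strategy this abstract describes — a stochastic global search whose annealing schedule narrows the search space, followed by Newton polishing of the residual — can be sketched on a toy cost function. This is an illustrative sketch only: the cost function, proposal scheme, and tuning constants below are invented for the example and are not the thesis's EDT boundary-value problem.

```python
import math
import random

def cost(x):
    # Toy multimodal cost standing in for the norm of the boundary-value residual.
    return (x - 2.0) ** 2 + 0.5 * math.sin(5.0 * x) ** 2

def simulated_annealing(f, x0, t0=2.0, cooling=0.97, steps=800, seed=0):
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best_x, best_f = x, fx
    for _ in range(steps):
        cand = x + rng.gauss(0.0, t)   # Markov-chain proposal scaled by temperature
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling                   # anneal: the effective search space narrows
    return best_x

def newton_refine(f, x, tol=1e-10, h=1e-5, iters=50):
    # Polish the annealed guess: Newton's method on f'(x) = 0, with a step cap
    # so the polish stays local to the basin found by the global search.
    for _ in range(iters):
        d1 = (f(x + h) - f(x - h)) / (2.0 * h)
        d2 = (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2
        if d2 < 1e-12:                 # only refine toward a minimum
            break
        step = max(-0.25, min(0.25, d1 / d2))
        x -= step
        if abs(step) < tol:
            break
    return x

x_coarse = simulated_annealing(cost, x0=-4.0)
x_star = newton_refine(cost, x_coarse)
```

The annealer only locates the basin; the Newton stage supplies the fast local convergence that drives the residual below tolerance.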
212

Determination of Optimal Stable Channel Profiles

Vigilar, Gregorio G. Jr. 28 January 1997 (has links)
A numerical model which determines the geometry of a threshold channel was recently developed. Such a model is an important tool for designing unlined irrigation canals and channelization schemes, and is useful when considering flow regulation. However, its applicability is limited in that its continuously curving boundary does not allow for sediment transport, which is an essential feature of natural rivers and streams. That model has thus been modified to predict the shape and stress distribution of an optimal stable channel: a channel with a flat-bed region over which bedload transport occurs, and curving bank regions composed of particles that are all in a state of incipient motion. It is the combination of this channel geometry and the phenomenon of momentum-diffusion that allows the present model to simulate the "stable bank, mobile bed" condition observed in rivers. The coupled equations of momentum-diffusion and force-balance are solved over the bank region to determine the shape of the channel banks (the bank solution). The width of the channel's flat-bed region is determined by solving the momentum-diffusion equation over the flat-bed region (the bed solution), using conditions at the junction of the flat-bed and bank regions that ensure matching of the bed and bank solutions. The model was tested against available experimental and field data, and was found to adequately predict the bank shape and significant dimensions of stable channels. To make the model results more amenable to the practicing engineer, design equations and plots were developed. These can be used as an alternative solution for stable channel design, relieving the practitioner of the need to run the numerical program. The case of a stable channel that transports both bedload and suspended sediment is briefly discussed. Governing equations and a possible solution scheme for this type of channel are suggested, laying the groundwork for the development of an appropriate numerical model.
/ Ph. D.
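The threshold channel this abstract builds on has a classical closed-form bank shape: the cosine profile (often attributed to Glover and Florey), in which every bank particle sits at incipient motion. The optimal stable channel generalizes it by inserting a flat-bed transport region between such banks. A minimal sketch of the classical profile, with illustrative variable names and unit depth:

```python
import math

def threshold_bank_profile(center_depth, repose_angle_deg, n=101):
    """Classical cosine bank profile for a threshold channel:
    depth(x) = h0 * cos(x * tan(phi) / h0), from the centreline (x = 0)
    out to the waterline, where the depth falls to zero."""
    phi = math.radians(repose_angle_deg)
    h0 = center_depth
    x_water = (math.pi / 2.0) * h0 / math.tan(phi)   # depth reaches zero here
    xs = [i * x_water / (n - 1) for i in range(n)]
    depths = [h0 * math.cos(x * math.tan(phi) / h0) for x in xs]
    return xs, depths

# Illustrative numbers: unit centre depth, 30-degree angle of repose.
xs, depths = threshold_bank_profile(center_depth=1.0, repose_angle_deg=30.0)
```

The depth decreases monotonically from the centreline to zero at the waterline; the modified model of the thesis replaces the single centreline point with a finite flat-bed region carrying bedload.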
213

Learning-based Optimal Control of Time-Varying Linear Systems Over Large Time Intervals

Baddam, Vasanth Reddy January 2023 (has links)
We solve the problem of two-point boundary optimal control of linear time-varying systems with unknown model dynamics using reinforcement learning. Leveraging singular perturbation theory techniques, we transform the time-varying optimal control problem into two time-invariant subproblems. This allows the utilization of an off-policy iteration method to learn the controller gains. We show that the performance of the learning-based controller approximates that of the model-based optimal controller and the approximation accuracy improves as the control problem's time horizon increases. We also provide a simulation example to verify the results. / M.S. / We use reinforcement learning to find two-point boundary optimal controls for linear time-varying systems with uncertain model dynamics. We divide the LTV control problem into two LTI subproblems using singular perturbation theory techniques. As a result, it is possible to identify the controller gains via a learning technique. We show that the learning-based controller's performance approaches that of the model-based optimal controller, with approximation accuracy growing with the time horizon of the control problem. In addition, we provide a simulated scenario to back up our findings.
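The policy-iteration idea behind such learning schemes — evaluate the current gain, then improve it, with off-policy RL replacing the model-based evaluation step by estimates from data — can be sketched on a scalar LTI subproblem. This is a hedged illustration with invented numbers, using the model-based (closed-form) evaluation step rather than the thesis's data-driven one:

```python
def policy_iteration(a, b, q, r, k0, iters=30):
    """Policy iteration for a scalar discrete-time LQR problem.
    Evaluation: solve the scalar Lyapunov equation for the cost of gain k.
    Improvement: recompute the greedy gain from that cost.
    Off-policy RL methods estimate the evaluation step from trajectories."""
    k = k0
    for _ in range(iters):
        acl = a - b * k                            # closed-loop dynamics
        p = (q + r * k * k) / (1.0 - acl * acl)    # policy evaluation (|acl| < 1)
        k = b * p * a / (r + b * b * p)            # policy improvement
    return k, p

# Unstable scalar plant with a stabilizing initial gain (illustrative numbers).
a, b, q, r = 1.05, 0.1, 1.0, 1.0
k, p = policy_iteration(a, b, q, r, k0=1.0)
```

Each iteration needs the current gain to be stabilizing; the iteration then converges to the Riccati solution, which is the point of the thesis's approximation guarantee.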
214

Optimal sizing and location of photovoltaic generators on three phase radial distribution feeder

Al-Sabounchi, Ammar M. Munir January 2011 (has links)
The aim of this work is to research the issue of optimal sizing and location of photovoltaic distributed generation (PVDG) units on radial distribution feeders, and develop new procedures by which the optimal location may be determined. The procedures consider the concept that the PVDG production varies independently from changes in feeder load demand. Based on that, the developed procedures deal with two performance curves; the feeder daily load curve driven by the consumer load demand, and the PVDG daily production curve driven by the solar irradiance. Due to the mismatch in the profile of these two curves the PVDG unit might end up producing only part of its capacity at the time the feeder meets its peak load demand. An actual example of that is the summer peak load demand in Abu Dhabi city that occurs at 5:30 pm, which is 5 hours after the time the PV array yields its peak. Consequently, solving the optimization problem for maximum line power loss reduction (∆PPL) is deemed inappropriate for the connection of PVDG units. Accordingly, the procedures have been designed to solve for maximum line energy loss reduction (∆EL). A suitable concept has been developed to rate the ∆EL at one time interval over the day, namely the feasible optimization interval (FOI). The concept has been put into effect by rating the ∆EL in terms of line power loss reduction at the FOI (ΔPLFOI). This application is deemed very helpful in running the calculations with no need to repeat the energy-based calculations at hourly or even shorter intervals. The procedures developed as part of this work have been applied to actual feeders at the 11kV level of the Abu Dhabi distribution network. Two main scenarios have been considered, relating to the avoidance and allowance of reverse power flow (RPF). Within this framework, several applications employing both single and multiple PVDG units have been solved and validated. The optimization procedures are solved iteratively.
Hence, effective sub-procedures to help determine the appropriate number of feasible iterative steps have been developed and incorporated successfully. Additionally, the optimization procedures have been designed to deal with a 3-phase feeder under an unbalanced load condition. The line impedances along the feeder are modeled in terms of a phase impedance matrix. At the same time, the modeling of feeder load curves along with the power flow calculations and the resulting losses in the lines are carried out by phase. The resulting benefits from each application have been evaluated and compared in terms of line power loss reduction at the FOI (∆PLFOI) along with voltage and current flow profile.
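The distinction the abstract draws — sizing for daily energy loss reduction (∆EL) rather than for loss reduction at the instant of peak load — can be sketched with a toy two-bus feeder. The load and PV profiles, line resistance, and search granularity below are invented per-unit values for illustration, not Abu Dhabi data:

```python
# Hourly per-unit profiles: the load peaks in the evening, the PV at midday,
# reproducing the mismatch described in the abstract.
load = [0.5, 0.4, 0.4, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.0, 1.0,
        0.9, 0.9, 0.9, 1.0, 1.0, 1.0, 0.9, 0.8, 0.8, 0.7, 0.6, 0.5]
pv_shape = [0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0, 1.0,
            0.9, 0.7, 0.5, 0.3, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
R = 0.05  # per-unit resistance of the single line feeding the load

def daily_energy_loss(pv_size):
    # Net line flow is demand minus PV injection; loss = R * I^2 per interval,
    # summed over the 24 hourly intervals of the day.
    return sum(R * (d - pv_size * s) ** 2 for d, s in zip(load, pv_shape))

# Scan candidate sizes for maximum daily energy-loss reduction (Delta-EL).
sizes = [i / 100.0 for i in range(0, 201)]
best = min(sizes, key=daily_energy_loss)
delta_EL = daily_energy_loss(0.0) - daily_energy_loss(best)
```

Note that the energy-optimal size is driven by the hours when the PV actually produces, not by the evening peak interval — which is exactly why the ∆PPL criterion is inappropriate here.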
215

A study of stochastic differential equations and Fokker-Planck equations with applications

Li, Wuchen 27 May 2016 (has links)
Fokker-Planck equations, along with stochastic differential equations, play vital roles in physics, population modeling, game theory and optimization (finite or infinite dimensional). In this thesis, we study three topics, both theoretically and computationally, centered around them. In part one, we consider the optimal transport for finite discrete states, which are on a finite but arbitrary graph. By defining a discrete 2-Wasserstein metric, we derive Fokker-Planck equations on finite graphs as gradient flows of free energies. By using a dynamical viewpoint, we obtain an exponential convergence result to equilibrium. This derivation provides tools for many applications, including numerics for nonlinear partial differential equations and evolutionary game theory. In part two, we introduce a new stochastic differential equation based framework for optimal control with constraints. The framework can efficiently solve several real world problems in differential games and robotics, including the path-planning problem. In part three, we introduce a new noise model for stochastic oscillators. With this model, we prove global boundedness of trajectories. In addition, we derive a pair of associated Fokker-Planck equations.
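The graph construction in part one can be illustrated with a toy master equation on a three-node graph whose detailed-balance rates make the Gibbs measure stationary; the trajectory is then a discrete analogue of a Fokker-Planck gradient flow converging exponentially to equilibrium. This is a hedged sketch — the potential, rates, and time step are invented, and the thesis's 2-Wasserstein construction is richer than this:

```python
import math

# Toy potential on a 3-node path graph and inverse temperature.
potential = [0.0, 1.0, 0.5]
beta = 1.0
edges = [(0, 1), (1, 2)]

def step(p, dt=0.01):
    """One explicit Euler step of the master equation dp/dt = sum of edge fluxes.
    Metropolis-type rates satisfy detailed balance, so the Gibbs measure
    p_i ~ exp(-beta * V_i) is the stationary state."""
    q = list(p)
    for i, j in edges:
        rate_ij = math.exp(-beta * max(potential[j] - potential[i], 0.0))
        rate_ji = math.exp(-beta * max(potential[i] - potential[j], 0.0))
        flux = p[i] * rate_ij - p[j] * rate_ji
        q[i] -= dt * flux
        q[j] += dt * flux
    return q

# Start fully concentrated on node 0 and flow toward equilibrium.
p = [1.0, 0.0, 0.0]
for _ in range(20000):
    p = step(p)
```

Mass is conserved exactly (the flux is antisymmetric in each edge), and the distribution relaxes to the Gibbs measure, mirroring the free-energy gradient-flow picture.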
216

Degradation of cellulosic material by Cellulomonas fimi

Kane, Steven Daniel January 2015 (has links)
The world stocks of fossil fuels are dwindling and may be all but exhausted before the end of the century. Despite this, demand for them for transport keeps increasing, along with the greenhouse gases their use produces. Renewable and less environmentally damaging forms of fuel are needed. Biofuels, particularly bioethanol, are a possibility to supplement or replace fossil fuels altogether. Ethanol produced by fermentation of starch sugars from corn is already in wide use. As this bioethanol is currently produced from crops such as corn and sugar cane, fuel crops are in direct competition for space and resources with food crops. This has led to increases in food prices and the search for more arable land. Hydrolysis of lignocellulosic biomass, a waste by-product of many industries, to produce the sugars necessary for ethanol production would ease many of the problems with current biofuels. Degradation of lignocellulose is not simple, however: it requires expensive chemical pre-treatments and large quantities of enzymes, usually from fungal species, making lignocellulosic bioethanol about 10 times more expensive to produce than corn-starch bioethanol. The production of a consolidated bioprocessor, an organism able to degrade, metabolise and ferment cellulosic material to produce ethanol or other useful products, would greatly reduce the cost currently associated with lignocellulosic biofuel. Cellulomonas fimi ATCC 484 is an actinomycete soil bacterium able to degrade cellulosic material efficiently. The US Department of Energy (DOE) released the genome sequence at the start of 2012. In this thesis the released genome has been searched for genes annotated as encoding polysaccharide-degrading enzymes as well as for metabolic pathways. Over 100 genes predicted to code for polysaccharide-hydrolysing enzymes were identified. Fifteen of these genes have been cloned as BioBricks, the standard synthetic-biology functional unit, expressed in E. coli and C. freundii, and assayed for endo β-1,4-glucanase activity using RBB-CMC, endo β-1,4-xylanase activity using RBB-xylan, β-D-xylosidase activity using ONPX, β-D-cellobiohydrolase activity using ONPC and α-L-arabinofuranosidase activity using PNPA. Eleven enzymes not previously reported from C. fimi were identified as active on a substrate, with the strongest activities being for 2 arabinofuranosidases (AfsA+B), 4 β-xylosidases (BxyC, BxyF, CelE and XynH), an endoglucanase (CelA), and 2 multifunctional enzymes, CelD and XynF, active as cellobiohydrolases, xylosidases and endoxylanases. Four enzymes were purified from E. coli cell lysates and characterised. It was found that AfsB has optimum activity at pH 6.5 and 45°C, BxyF at pH 6.0 and 45°C, and XynH at pH 9.0 and 80°C. XynF exhibited different optima for the 3 substrates: pH 6.0 and 60°C for ONPC, pH 4.5 and 50°C for ONPX, and pH 5.5 and 40°C for RBB-xylan. Searching the genome and screening genes for activities will help genome annotation in the future by increasing the number of positively annotated genes in the databases. The BioBrick format is well suited for rapid cloning and expression of genes to be classified. Searching and screening the genome has also given insights into the complex and large network of enzymes required to fully hydrolyse and metabolise the sugars released from lignocellulose. These enzymes are spread across many different glycosyl hydrolase families conferring different catalytic activities. The characterisation of these novel enzymes points towards a system adapted not only to a broad range of substrates but also to environmental factors such as high temperature and pH. Genomic analysis revealed gene clusters and traits which could be used in the design of a synthetic cellulolytic network, or for the conversion of C. fimi into a consolidated bioprocessor itself.
217

Optimal Parsing for dictionary text compression / Parsing optimal pour la compression du texte par dictionnaire

Langiu, Alessio 03 April 2012 (has links)
Les algorithmes de compression de données basés sur les dictionnaires incluent une stratégie de parsing pour transformer le texte d'entrée en une séquence de phrases du dictionnaire. Etant donné un texte, un tel processus n'est généralement pas unique et, pour comprimer, il est logique de trouver, parmi les parsing possibles, celui qui minimise le plus le taux de compression finale. C'est ce qu'on appelle le problème du parsing. Un parsing optimal est une stratégie de parsing ou un algorithme de parsing qui résout ce problème en tenant compte de toutes les contraintes d'un algorithme de compression ou d'une classe d'algorithmes de compression homogène. Les contraintes de l'algorithme de compression sont, par exemple, le dictionnaire lui-même, c'est-à-dire l'ensemble dynamique de phrases disponibles, et combien une phrase pèse sur le texte comprimé, c'est-à-dire quelle est la longueur du mot de code qui représente la phrase, appelée aussi le coût du codage d'un pointeur de dictionnaire. En plus de 30 ans d'histoire de la compression de texte par dictionnaire, une grande quantité d'algorithmes, de variantes et d'extensions sont apparus. Cependant, alors qu'une telle approche de la compression du texte est devenue l'une des plus appréciées et utilisées dans presque tous les processus de stockage et de communication, seuls quelques algorithmes de parsing optimaux ont été présentés. Beaucoup d'algorithmes de compression manquent encore d'optimalité pour leur parsing, ou du moins de la preuve de l'optimalité. Cela se produit parce qu'il n'y a pas un modèle général pour le problème de parsing qui inclut tous les algorithmes par dictionnaire et parce que
les parsing optimaux existants travaillent sous des hypothèses trop restrictives. Ce travail focalise sur le problème de parsing et présente à la fois un modèle général pour la compression des textes basée sur les dictionnaires appelé la théorie Dictionary-Symbolwise et un algorithme général de parsing qui a été prouvé être optimal sous certaines hypothèses réalistes. Cet algorithme est appelé Dictionary-Symbolwise Flexible Parsing et couvre pratiquement tous les cas des algorithmes de compression de texte basés sur dictionnaire ainsi que la grande classe de leurs variantes où le texte est décomposé en une séquence de symboles et de phrases du dictionnaire. Dans ce travail, nous avons aussi considéré le cas d'un mélange libre d'un compresseur par dictionnaire et d'un compresseur symbolwise. Notre Dictionary-Symbolwise Flexible Parsing couvre également ce cas-ci. Nous avons bien un algorithme de parsing optimal dans le cas de compression Dictionary-Symbolwise où le dictionnaire est fermé par préfixe et le coût d'encodage des pointeurs du dictionnaire est variable. Le compresseur symbolwise est un compresseur symbolwise classique qui fonctionne en temps linéaire, comme le sont de nombreux codeurs communs à longueur variable. Notre algorithme fonctionne sous l'hypothèse qu'un graphe spécial, qui sera décrit par la suite, soit bien défini. Même si cette condition n'est pas remplie, il est possible d'utiliser la même méthode pour obtenir des parsing presque optimaux. Dans le détail, lorsque le dictionnaire est comme LZ78, nous montrons comment mettre en œuvre notre algorithme en temps linéaire. Lorsque le dictionnaire est comme LZ77, notre algorithme peut être mis en œuvre en temps O(n log n) où n est la longueur du texte. Dans les deux cas, la complexité en espace est O(n). Même si l'objectif principal de ce travail est de nature théorique, des résultats expérimentaux seront présentés pour souligner certains effets pratiques de l'optimalité du parsing sur les performances de compression et quelques résultats expérimentaux plus détaillés sont mis dans une annexe appropriée / Dictionary-based compression algorithms include a parsing strategy to transform the input text into a sequence of dictionary phrases. Given a text, such a process is usually not unique and, for compression purposes, it makes sense to find, among the possible parsings, one that minimizes the final compression ratio. This is the parsing problem. An optimal parsing is a parsing strategy, or a parsing algorithm, that solves the parsing problem taking into account all the constraints of a compression algorithm or of a class of homogeneous compression algorithms. Compression algorithm constraints are, for instance, the dictionary itself, i.e. the dynamic set of available phrases, and how much a phrase weighs in the compressed text, i.e. the length of the codeword that represents the phrase, also denoted as the cost of encoding a dictionary pointer. In more than 30 years of history of dictionary-based text compression, plenty of algorithms, variants and extensions have appeared, and this approach to text compression has become one of the most appreciated and utilized in almost all storage and communication processes; yet only a few optimal parsing algorithms have been presented. Many compression algorithms still lack optimality of their parsing or, at least, a proof of optimality. This happens because there is no general model of the parsing problem that includes all the dictionary-based algorithms and because the existing optimal parsings work under too restrictive hypotheses.
This work focuses on the parsing problem and presents both a general model for dictionary-based text compression, called the Dictionary-Symbolwise theory, and a general parsing algorithm that is proved to be optimal under some realistic hypotheses. This algorithm is called Dictionary-Symbolwise Flexible Parsing and it covers almost all the cases of dictionary-based text compression algorithms, together with the large class of their variants where the text is decomposed into a sequence of symbols and dictionary phrases. In this work we further consider the case of a free mixture of a dictionary compressor and a symbolwise compressor. Our Dictionary-Symbolwise Flexible Parsing covers this case as well. We have indeed an optimal parsing algorithm in the case of dictionary-symbolwise compression where the dictionary is prefix-closed and the cost of encoding a dictionary pointer is variable. The symbolwise compressor is any classical one that works in linear time, as many common variable-length encoders do. Our algorithm works under the assumption that a special graph, which will be described in the following, is well defined. Even if this condition is not satisfied, it is possible to use the same method to obtain almost-optimal parses. In detail, when the dictionary is LZ78-like, we show how to implement our algorithm in linear time. When the dictionary is LZ77-like, our algorithm can be implemented in O(n log n) time, where n is the length of the text. Both have O(n) space complexity. Even if the main aim of this work is of a theoretical nature, some experimental results are presented to underline some practical effects of parsing optimality on compression performance, and some more detailed experiments are hosted in a dedicated appendix.
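The parsing problem formalized above has a classic dynamic-programming core: view text positions as nodes of a DAG, dictionary phrases (and single symbols, in the dictionary-symbolwise setting) as weighted edges, and pick the minimum-cost path. A minimal sketch with a static dictionary and fixed codeword costs — the real algorithms handle dynamic LZ-style dictionaries and variable pointer costs:

```python
def optimal_parse(text, dictionary, phrase_cost, symbol_cost):
    """Shortest path over the parsing graph: node i is position i in the text;
    best[i] is the minimal cost of encoding text[:i]."""
    n = len(text)
    INF = float("inf")
    best = [INF] * (n + 1)
    choice = [None] * (n + 1)   # the phrase or symbol ending an optimal parse at i
    best[0] = 0
    for i in range(n):
        if best[i] == INF:
            continue
        # symbolwise edge: emit one literal symbol
        if best[i] + symbol_cost < best[i + 1]:
            best[i + 1] = best[i] + symbol_cost
            choice[i + 1] = text[i]
        # dictionary edges: any phrase matching at position i
        for w in dictionary:
            j = i + len(w)
            if text.startswith(w, i) and best[i] + phrase_cost < best[j]:
                best[j] = best[i] + phrase_cost
                choice[j] = w
    # backtrack the optimal parse
    parse, i = [], n
    while i > 0:
        parse.append(choice[i])
        i -= len(choice[i])
    return best[n], parse[::-1]

total, parse = optimal_parse("abab", ["ab", "aba", "abab"], phrase_cost=2, symbol_cost=1)
```

On the second example below, a greedy longest-match parse ("aa" then two literals, cost 8) is beaten by the DP solution ("a" + "aab", cost 5), which is the whole point of optimal parsing.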
218

Transport numérique de quantités géométriques / Numerical transport of geometric quantities

Lepoultier, Guilhem 25 September 2014 (has links)
Une part importante de l’activité en calcul scientifique et analyse numérique est consacrée aux problèmes de transport d’une quantité par un champ donné (ou lui-même calculé numériquement). Les questions de conservation étant essentielles dans ce domaine, on formule en général le problème de façon eulérienne sous la forme d’un bilan au niveau de chaque cellule élémentaire du maillage, et l’on gère l’évolution en suivant les valeurs moyennes dans ces cellules au cours du temps. Une autre approche consiste à suivre les caractéristiques du champ et à transporter les valeurs ponctuelles le long de ces caractéristiques. Cette approche est délicate à mettre en oeuvre, n’assure pas en général une parfaite conservation de la matière transportée, mais peut permettre dans certaines situations de transporter des quantités non régulières avec une grande précision, et sur des temps très longs (sans conditions restrictives sur le pas de temps comme dans le cas des méthodes eulériennes). Les travaux de thèse présentés ici partent de l’idée suivante : dans le cadre des méthodes utilisant un suivi de caractéristiques, transporter une quantité supplémentaire géométrique apportant plus d’informations sur le problème (on peut penser à un tenseur des contraintes dans le contexte de la mécanique des fluides, une métrique sous-jacente lors de l’adaptation de maillage, etc.). Un premier pan du travail est la formulation théorique d’une méthode de transport de telles quantités. Elle repose sur le principe suivant : utiliser la différentielle du champ de transport pour calculer la différentielle du flot, nous donnant une information sur la déformation locale du domaine, nous permettant de modifier nos quantités géométriques. Cette approche a été explorée dans le contexte des méthodes particulaires, plus particulièrement dans le domaine de la physique des plasmas.
Ces premiers travaux amènent à travailler sur des densités paramétrées par un couple point/tenseur, comme les gaussiennes par exemple, qui sont un contexte d’applications assez naturelles de la méthode. En effet, on peut par la formulation établie transporter le point et le tenseur. La question qui se pose alors et qui constitue le second axe de notre travail est celle du choix d’une distance sur des espaces de densités, permettant par exemple d’étudier l’erreur commise entre la densité transportée et son approximation en fonction de la « concentration » au voisinage du point. On verra que les distances Lp montrent des limites par rapport au phénomène que nous souhaitons étudier. Cette étude repose principalement sur deux outils, les distances de Wasserstein, tirées de la théorie du transport optimal, et la distance de Fisher, au carrefour des statistiques et de la géométrie différentielle. / In applied mathematics, a significant share of scientific computing concerns the transport of a quantity by a given vector field: fluid mechanics, kinetic theory, etc. Using particle methods, we transport an additional geometric quantity that carries more information about the problem. The first part of this work is the theoretical formulation of this kind of transport: it uses the spatial differential of the vector field to compute the differential of the flow. An immediate and natural application is densities parametrized by a point and a tensor, such as Gaussians; with the established formulation we can transport both the point and the tensor. The natural question is then the accuracy of such an approximation, which is the second part of our work, where we discuss distances on spaces of densities suited to estimating the error between the transported density and its approximation.
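The transport rule at the heart of the method — integrate the point along x' = v(x) while integrating the differential of the flow along J' = Dv(x) J, then push a covariance tensor forward as J Sigma J^T — can be sketched with a toy field. The rotation field, integrator, and step size below are invented for illustration:

```python
import math

def v(x, y):
    # Toy divergence-free transport field: rigid rotation about the origin.
    return (-y, x)

def dv(x, y):
    # Its spatial differential Dv, constant for this linear field.
    return ((0.0, -1.0), (1.0, 0.0))

def transport(point, J, t_final, dt=1e-4):
    """Euler-integrate the point along x' = v(x) and the flow differential
    along J' = Dv(x) J; a Gaussian's covariance is then J @ Sigma @ J.T."""
    x, y = point
    (a, b), (c, d) = J
    for _ in range(int(round(t_final / dt))):
        vx, vy = v(x, y)
        (m00, m01), (m10, m11) = dv(x, y)
        x, y = x + dt * vx, y + dt * vy
        a, b, c, d = (a + dt * (m00 * a + m01 * c),
                      b + dt * (m00 * b + m01 * d),
                      c + dt * (m10 * a + m11 * c),
                      d + dt * (m10 * b + m11 * d))
    return (x, y), ((a, b), (c, d))

# Quarter turn: the point (1, 0) should land near (0, 1) and J near the
# 90-degree rotation matrix, with det(J) = 1 (the field is divergence-free).
pt, J = transport((1.0, 0.0), ((1.0, 0.0), (0.0, 1.0)), t_final=math.pi / 2)
```

Because J tracks the local deformation of the domain, any tensor attached to the particle can be updated consistently with the flow, which is exactly the extra information the thesis proposes to transport.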
219

On multi-contact dynamic motion using reduced models / Locomotion dynamique en multi-contact par modèles réduits

Audren, Hervé 14 November 2017 (has links)
Pour les robots marcheurs, c'est à dire bipèdes, quadrupèdes, hexapodes, etc... la notion de stabilité est primordiale. En effet, ces robots possèdent une base flottante sous-actuée : il leur faut prendre appui sur l'environnement pour se mouvoir. Toutefois, cette caractéristique les rend vulnérables: ils peuvent tomber. Il est donc indispensable de pouvoir différencier un mouvement stable d'un mouvement non-stable. Dans cette thèse, la stabilité est considérée du point de vue d'un modèle réduit au Centre de Masse (ou Centre de Gravité). Nous montrons dans un premier temps comment calculer la zone de stabilité de ce modèle dans le cas statique. Bien que cette région soit un objet purement géométrique, nous montrons qu'elle dépend des forces de contact admissibles. Ensuite, nous montrons qu'introduire la notion de robustesse, c'est à dire une marge d'incertitude sur les accélérations (ou les forces de contacts) transforme la forme plane du cas statique en un volume tridimensionnel. Afin de calculer cette forme, nous présentons de nouveaux algorithmes récursifs. Nous appliquons ensuite des algorithmes provenant de l'infographie qui permettent de déformer continûment ces objets géométriques. Cette transformation nous permet d'approximer des changements dans les variables influençant ces formes. Calculer le volume de stabilité explicitement nous permet de découpler les accélérations des positions du CdM, ce qui nous permet de formuler un problème de contrôle prédictif linéaire. Nous proposons aussi une autre formulation linéaire qui, au prix de calculs plus coûteux, permet d'exploiter pleinement la dynamique du robot. Enfin, nous appliquons ces résultats dans une approche hiérarchique qui nous permet de générer des mouvements du corps complet du robot, aussi bien sur une véritable plateforme humanoïde qu'en simulation. / In the context of legged robotics, stability (or equilibrium) is of the utmost importance. 
Indeed, as legged robots have a non-actuated floating base, they can fall. To avoid falling, we must be able to tell stable from non-stable motion apart. This thesis approaches stability from a reduced-model point of view: our main interest is the Center of Mass. We show how to compute stability regions for this reduced model, at first based on purely static stability. Although purely geometrical in nature, we show how they depend on the admissible contact forces. Then, we show that taking robustness into account, in the sense of a margin on accelerations (or contact forces), transforms the usual two-dimensional stability region into a three-dimensional one. To compute this shape, we introduce novel recursive algorithms. We show how we can apply computer-graphics techniques for shape morphing in order to continuously deform the aforementioned regions. This allows us to approximate changes in the parameters of those shapes, but also to interpolate between shapes. Finally, we exploit the effective decoupling offered by the explicit computation of the stability polyhedron to formulate a linear, minimal-jerk model-predictive control problem. We also propose another linear MPC problem that exploits more of the available dynamics, but at an increased computational cost. We then adopt a hierarchical approach, and use those CoM results as input to our whole-body controller. Results are demonstrated on real hardware and in simulation.
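The purely static criterion from which the thesis starts — the ground projection of the CoM must lie in the convex hull of the contact points, before force limits and robustness margins reshape the region into a 3-D volume — can be sketched as follows. Flat ground and unbounded friction are simplifying assumptions of this sketch, and the foot positions are invented:

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o): positive for a left turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone-chain hull, returned in counterclockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def statically_stable(com_xy, contacts_xy):
    # Stable iff the CoM projection is inside (or on) the support polygon.
    hull = convex_hull(contacts_xy)
    n = len(hull)
    return all(cross(hull[i], hull[(i + 1) % n], com_xy) >= 0 for i in range(n))

# Illustrative rectangular stance (e.g. four contact points on flat ground).
feet = [(0.0, 0.0), (0.3, 0.0), (0.0, 0.4), (0.3, 0.4)]
```

The thesis's contribution starts where this sketch stops: the actual region shrinks or grows with the admissible contact forces, and adding acceleration margins turns this polygon into a polyhedral volume.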
220

Corporate valuation and optimal operation under liquidity constraints

Cheng, Mingliang January 2016 (has links)
We investigate the impact of cash reserves upon the optimal behaviour of a modelled firm that has uncertain future revenues. To achieve this, we build up a corporate financing model of a firm from a Real Options foundation, with the option to close as a core business decision maintained throughout. We model the firm by employing an optimal stochastic control mathematical approach, which is based upon a partial differential equations perspective. In so doing, we are able to assess the incremental impacts upon the optimal operation of the cash-constrained firm by sequentially including: an optimal dividend distribution; optimal equity financing; and optimal debt financing (conducted in a novel equilibrium setting between firm and creditor). We present efficient numerical schemes to solve these models, which are generally built from the Projected Successive Over-Relaxation (PSOR) method and the Semi-Lagrangian approach. Using these numerical tools, and our gained economic insights, we then allow the firm the option to expand its operation as well, so that it may take advantage of favourable economic conditions.
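The Projected Successive Over-Relaxation (PSOR) scheme named in the abstract solves linear complementarity problems of exactly the kind an embedded option to close induces in a value PDE. A minimal sketch on a 1-D obstacle problem, min(-u'', u - g) = 0 with u(0) = u(1) = 0 — the operator, obstacle, and parameters are illustrative, not the firm-value model:

```python
def psor_obstacle(g, omega=1.5, tol=1e-10, max_iter=20000):
    """PSOR for the discrete obstacle problem: Gauss-Seidel sweeps with
    over-relaxation, projected onto the constraint u >= g at every node."""
    n = len(g)
    u = [max(gi, 0.0) for gi in g]                 # start at the obstacle
    for _ in range(max_iter):
        err = 0.0
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0       # boundary u(0) = 0
            right = u[i + 1] if i < n - 1 else 0.0  # boundary u(1) = 0
            gs = 0.5 * (left + right)               # Gauss-Seidel value for -u'' = 0
            new = max(g[i], u[i] + omega * (gs - u[i]))   # project onto u >= g
            err = max(err, abs(new - u[i]))
            u[i] = new
        if err < tol:
            break
    return u

# Obstacle: a tent peaking at 0.3 in the middle of (0, 1); the solution must
# majorize it, and be linear (discretely harmonic) wherever it does not touch.
n = 99
h = 1.0 / (n + 1)
g = [0.5 - abs((i + 1) * h - 0.5) - 0.2 for i in range(n)]
u = psor_obstacle(g)
```

The projection step is what distinguishes PSOR from plain SOR: it enforces the early-exercise (here, contact) constraint at every sweep, so the converged iterate solves the complementarity problem rather than the unconstrained linear system.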
