211.
Optimal Paths in Gliding Flight. Wolek, Artur, 28 May 2015.
Underwater gliders are robust, long-endurance ocean sampling platforms that are increasingly being deployed in coastal regions. This new environment is characterized by shallow waters and significant currents that can challenge the mobility of these efficient (but traditionally slow-moving) vehicles. This dissertation aims to improve the performance of shallow water underwater gliders through path planning.
The path planning problem is formulated for a dynamic particle (or "kinematic car") model. The objective is to identify the path which satisfies specified boundary conditions and minimizes a particular cost. Several cost functions are considered. The problem is addressed using optimal control theory. The length scales of interest for path planning are within a few turn radii.
First, an approach is developed for planning minimum-time paths, for a fixed-speed glider, that are sub-optimal but are guaranteed to be feasible in the presence of unknown time-varying currents. Next, the minimum-time problem is solved for a glider with speed controls that may vary between the stall speed and the maximum speed. Last, optimal paths that minimize change in depth (equivalently, maximize range) are investigated.
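The abstract gives no code; as a rough sketch of the kinematic-car model these formulations build on, the following Python snippet integrates the planar dynamics x' = v cos(theta), y' = v sin(theta), theta' = u under a bang-bang turn-rate schedule. The constant speed v, turn-rate bound u_max, and forward-Euler integration are illustrative assumptions, not the dissertation's method.

```python
import numpy as np

def simulate_kinematic_car(u_func, x0, v=1.0, dt=0.01, T=10.0):
    """Integrate the planar kinematic-car model
        x' = v cos(theta), y' = v sin(theta), theta' = u
    with forward Euler. u_func(t) returns the turn-rate control;
    the caller is responsible for keeping |u| <= u_max."""
    n = int(T / dt)
    states = np.empty((n + 1, 3))
    states[0] = x0
    for k in range(n):
        x, y, th = states[k]
        u = u_func(k * dt)
        states[k + 1] = [x + dt * v * np.cos(th),
                         y + dt * v * np.sin(th),
                         th + dt * u]
    return states

# Bang-bang schedule (turn hard, then go straight): minimum-time paths
# for such models are concatenations of maximal turns and straight
# segments (Dubins-type paths), which motivates this control shape.
u_max = 0.5
path = simulate_kinematic_car(lambda t: u_max if t < 2.0 else 0.0,
                              x0=np.array([0.0, 0.0, 0.0]))
print(path[-1])  # final (x, y, heading)
```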
Recognizing that path planning alone cannot overcome all of the challenges associated with significant currents and shallow waters, the design of a novel underwater glider with improved capabilities is explored. A glider with a pneumatic buoyancy engine (allowing large, rapid buoyancy changes) and a cylindrical moving mass mechanism (generating large pitch and roll moments) is designed, manufactured, and tested to demonstrate potential improvements in speed and maneuverability. / Ph. D.
212.
Optimal Electrodynamic Tether Phasing and Orbit-Raising Maneuvers. Bitzer, Matthew Scott, 17 June 2009.
We present optimal solutions for a point-mass electrodynamic tether (EDT) performing phasing and orbit-raising maneuvers. An EDT is a conductive tether, on the order of 20 km in length, that uses the Lorentz force to provide propellantless thrust. We develop the optimal equations of motion using Pontryagin's Minimum Principle and find numerical solutions using a global, stochastic optimization method called Adaptive Simulated Annealing, which uses Markov chains and the system's cost function to narrow down the search space. Newton's Method then brings the error in the residual below a specified tolerance. We compare the EDT solutions to similar constant-thrust solutions and investigate patterns in the solution space. The EDT phasing maneuver has invariance properties similar to constant-thrust phasing maneuvers. Analyzing the solution space reveals that the EDT is faster than constant-thrust spacecraft at phasing maneuvers but slower at orbit-raising maneuvers. Also, several bifurcation lines occur in the solution spaces for all maneuvers studied. / Master of Science
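As a hedged illustration of the stochastic search described above, the sketch below implements a plain simulated-annealing loop in Python, not the adaptive, reannealing variant the thesis uses, and minimizes a stand-in quadratic cost. In the thesis, the cost would be the boundary-condition residual of the two-point boundary-value problem, later polished by Newton's Method.

```python
import math, random

def simulated_annealing(cost, x0, step=0.5, T0=1.0, cooling=0.995, iters=5000):
    """Generic simulated-annealing minimizer: propose a random
    perturbation, always accept improvements, and accept uphill
    moves with probability exp(-delta/T) while T cools."""
    x, fx, T = list(x0), cost(x0), T0
    best, fbest = list(x), fx
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in x]
        fc = cost(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        T *= cooling
    return best, fbest

# Stand-in cost: distance of two unknown parameters from a target.
# In the thesis, the parameters would be the unknown initial costates.
guess, err = simulated_annealing(lambda p: (p[0] - 1.0)**2 + (p[1] + 2.0)**2,
                                 x0=[0.0, 0.0])
print(guess, err)
```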
213.
Determination of Optimal Stable Channel Profiles. Vigilar, Gregorio G., Jr., 28 January 1997.
A numerical model which determines the geometry of a threshold channel was recently developed. Such a model is an important tool for designing unlined irrigation canals and channelization schemes, and is useful when considering flow regulation. However, its applicability is limited in that its continuously curving boundary does not allow for sediment transport, which is an essential feature of natural rivers and streams. That model has thus been modified to predict the shape and stress distribution of an optimal stable channel: a channel with a flat-bed region over which bedload transport occurs, and curving bank regions composed of particles that are all in a state of incipient motion. It is the combination of this channel geometry and the phenomenon of momentum-diffusion that allows the present model to simulate the "stable bank, mobile bed" condition observed in rivers. The coupled equations of momentum-diffusion and force-balance are solved over the bank region to determine the shape of the channel banks (the bank solution). The width of the channel's flat-bed region is determined by solving the momentum-diffusion equation over the flat-bed region (the bed solution), using conditions at the junction of the flat-bed and bank regions that ensure matching of the bed and bank solutions. The model was tested against available experimental and field data, and was found to adequately predict the bank shape and significant dimensions of stable channels. To make the model results more amenable to the practicing engineer, design equations and plots were developed. These can be used as an alternative solution for stable channel design, relieving the practitioner of the need to run the numerical program. The case of a stable channel that transports both bedload and suspended sediment is briefly discussed. Governing equations and a possible solution scheme for this type of channel are suggested, laying the groundwork for the development of an appropriate numerical model. / Ph. D.
214.
Learning-based Optimal Control of Time-Varying Linear Systems Over Large Time Intervals. Baddam, Vasanth Reddy, January 2023.
We solve the problem of two-point boundary optimal control of linear time-varying systems with unknown model dynamics using reinforcement learning. Leveraging singular perturbation theory, we transform the time-varying optimal control problem into two time-invariant subproblems. This allows the use of an off-policy iteration method to learn the controller gains. We show that the performance of the learning-based controller approximates that of the model-based optimal controller, and that the approximation accuracy improves as the control problem's time horizon increases. We also provide a simulation example to verify the results. / M.S. / We use reinforcement learning to find two-point boundary optimal controls for linear time-varying (LTV) systems with uncertain model dynamics. Using singular perturbation theory, we divide the LTV control problem into two linear time-invariant (LTI) subproblems, which makes it possible to identify the controller gains via a learning technique. We show that the learning-based controller's performance approaches that of the model-based optimal controller, with approximation accuracy growing with the time horizon of the control problem. In addition, we provide a simulated scenario to back up our findings.
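A minimal model-based baseline for the kind of gains the learning scheme is said to recover: the sketch below iterates the discrete-time Riccati recursion for a toy LTI subproblem. The system matrices and the value-iteration approach are illustrative assumptions; the thesis's off-policy method learns the gain without knowing A and B.

```python
import numpy as np

def lqr_value_iteration(A, B, Q, R, iters=500):
    """Iterate the discrete-time Riccati recursion
        P <- Q + A'P(A - BK),  K = (R + B'PB)^{-1} B'PA
    until the value matrix P converges, then return the gain K.
    This is the model-based answer that a learning-based
    controller would be expected to approximate."""
    P = Q.copy()
    for _ in range(iters):
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K, P

# Toy time-invariant subproblem (the thesis splits the original
# time-varying problem into two such LTI problems).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K, P = lqr_value_iteration(A, B, Q=np.eye(2), R=np.array([[1.0]]))
print(K)
```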
215.
Optimal Transport Theory and Applications to Unsupervised Learning and Asset Pricing. Marcelo Cruz de Souza (19207069), 30 July 2024.
This thesis presents results in Optimal Transport theory and applications to unsupervised statistical learning and robust asset pricing. In unsupervised learning applications, we assume that we observe the distribution of some data of interest which might be too big in size, have a high-dimensional structure, or be polluted with noise. We investigate the construction of an optimal distribution that precedes the given data distribution in convex order, which means that the given distribution is a dispersion of it. The intention is to use this construction to estimate a concise, lower-dimensional or unpolluted version of the given data. We provide existence and convergence results and show that popular methods including k-means and principal curves can be unified under this model. We further investigate a relaxation of the order relation that leads to similar results in terms of existence and convergence and broadens the range of applications to include, e.g., Principal Component Analysis and the factor model. We relate the two versions and show that the relaxed problem can be described as a bilinear optimization with a tractable computational method. As examples, we apply our method to generate fixed-weight k-means, principal curves with bounded curvature that are actual generalizations of PCA, and a latent factor structure in a classical Gaussian setting. In robust finance applications, we investigate the Vectorial Martingale Optimal Transport problem, the geometry of its solutions, and its application to model-free asset pricing. We consider a multi-asset, two-period contract pricing model and show that the solution to this problem with a sub- or supermodular payoff function reduces to a single factor in the first period in the case of two underlying assets (d = 2), but not in general for a greater number of assets. This result for d = 2 enables the construction of a joint distribution of prices at the first period from market data, which adds information to the model-free pricing method and reduces the computational dimensionality. We provide an improved version of an existing pricing method and show numerical evidence of increased accuracy.
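To make the k-means connection concrete, here is a hedged sketch (plain Lloyd's algorithm in Python, not the thesis's general construction): the k weighted centroids form a discrete distribution that the empirical data distribution dominates in convex order, since each centroid is the conditional mean of the points assigned to it.

```python
import numpy as np

def kmeans_as_w2_projection(X, k, iters=100, seed=0):
    """Lloyd's k-means, read through the convex-order lens: the k
    weighted centroids define a discrete distribution preceding the
    empirical distribution of X in convex order, with the nearest-
    center assignment acting as the (conditional-mean) transport."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :])**2).sum(-1)
        labels = d.argmin(1)                 # assign points to centers
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)     # centroid = conditional mean
    weights = np.bincount(labels, minlength=k) / len(X)
    return centers, weights

X = np.random.default_rng(1).normal(size=(500, 2))
centers, w = kmeans_as_w2_projection(X, k=4)
print(centers, w)
```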
216.
Scalable Combinatorial Algorithms for Optimal Transport Based Similarity Metrics. Zhang, Kaiyi, 22 November 2024.
Optimal Transport (OT), also known as Wasserstein distance, is a valuable metric for comparing probability distributions. Owing to its appealing statistical properties, researchers in various fields, such as machine learning, use OT in applications. However, computing both exact and approximate OT is computationally expensive and impractical for large datasets. Furthermore, OT is sensitive to small noise in the input distributions. In this document, we propose combinatorial methods to design scalable and noise-resistant solutions for OT.
We present four key contributions in this work. First, we introduce a novel combinatorial parallel algorithm for approximating OT, which achieves a parallel time complexity of $O(\log n/\varepsilon^2)$, where $n$ is the input size and $\varepsilon$ is the additive error. Our algorithm outperforms the state of the art in experiments. Second, we propose a new concept, the OT-profile, representing the function of minimum partial optimal transport cost $\sigma_\alpha$ versus the transported mass $\alpha$. This can be used to identify outliers in real-world data. The utility of the OT-profile is demonstrated on outlier detection and PU-learning tasks, where it outperforms the state of the art. Third, building upon the OT-profile, we propose a new OT-based metric for comparing distributions that is more robust to noise. This metric preserves desirable properties while reducing its sensitivity to noise for high $p$ values, providing a robust solution for real-world datasets. Lastly, we have developed a Python library that integrates our algorithms and methods into a user-friendly framework, making it easier for practitioners to adopt our methods. Our work enhances the computational efficiency and robustness of OT, making it practical for machine learning applications. / Doctor of Philosophy / Imagine a task in which you are required to fill a set of holes with piles of sand using the least amount of effort. This is the idea behind Optimal Transport (OT), also known as the Wasserstein distance. This distance is a popular mathematical tool that measures the similarity (or distance) between two probability distributions. It has many applications in different areas, including machine learning and data analysis. For example, it is employed in generative models to measure the difference between generated and real images, or in analyzing real-world datasets containing noise.
However, the practice of OT faces two major problems. First, the computation becomes expensive with larger datasets, making it infeasible for large-scale applications. Therefore, it is important to make the computation of OT more efficient in processing a large volume of data. Second, OT is sensitive to outliers or noise in the input distributions, which can significantly influence the results.
In this thesis, we address these challenges through four main contributions. First, we present novel parallel combinatorial algorithms that reduce the OT computational burden. Our approach achieves a state-of-the-art time complexity and, at the same time, is easy to implement. Second, we introduce an innovative method for computing an 'OT-profile', which enables us to identify outliers in the input distributions. This approach improves the robustness of OT in real-world scenarios, where data often contains noise and perturbations.
Third, building on the OT-profile concept, we propose a new robust OT-based metric for comparing distributions in the presence of noise. To broaden the impact of these advancements, we have open-sourced a Python library that implements our methods behind a user-friendly interface, making it easier for researchers and practitioners to apply our solutions to their data analysis tasks. This work not only advances the theoretical analysis of OT but also supports practical applications: by addressing the two challenges of scalability and robustness, we make OT easier to apply reliably to large-scale data.
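As a small illustration of the OT-profile idea from the abstracts above, the sketch below computes the minimum partial transport cost $\sigma_\alpha$ for a few mass fractions $\alpha$ by posing partial OT as a linear program with SciPy. This is a generic LP stand-in usable only at toy sizes; the thesis's contribution is precisely to avoid such generic solvers with scalable combinatorial methods.

```python
import numpy as np
from scipy.optimize import linprog

def partial_ot_cost(mu, nu, C, alpha):
    """Minimum cost of transporting mass fraction alpha between
    discrete distributions mu and nu with cost matrix C:
      min <C, gamma>  s.t.  gamma 1 <= mu,  gamma' 1 <= nu,
                            sum(gamma) = alpha,  gamma >= 0."""
    m, n = C.shape
    A_ub = np.zeros((m + n, m * n))
    for i in range(m):
        A_ub[i, i*n:(i+1)*n] = 1.0       # row sums <= mu_i
    for j in range(n):
        A_ub[m + j, j::n] = 1.0          # column sums <= nu_j
    b_ub = np.concatenate([mu, nu])
    A_eq = np.ones((1, m * n))           # total transported mass = alpha
    res = linprog(C.ravel(), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=[alpha], bounds=(0, None))
    return res.fun

# Trace the OT-profile sigma(alpha) on a toy instance.
rng = np.random.default_rng(0)
mu = np.full(5, 0.2); nu = np.full(5, 0.2)
C = rng.random((5, 5))
for a in (0.25, 0.5, 0.75, 1.0):
    print(a, partial_ot_cost(mu, nu, C, a))
```

Outliers show up as points whose mass is transported only at high cost, so they dominate the steep tail of the profile as alpha approaches 1.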
217.
Optimal sizing and location of photovoltaic generators on three phase radial distribution feeder. Al-Sabounchi, Ammar M. Munir, January 2011.
The aim of this work is to research the issue of optimal sizing and location of photovoltaic distributed generation (PVDG) units on radial distribution feeders, and to develop new procedures by which the optimal location may be determined. The procedures consider the fact that PVDG production varies independently of changes in feeder load demand. Accordingly, they deal with two performance curves: the feeder daily load curve, driven by consumer demand, and the PVDG daily production curve, driven by solar irradiance. Because of the mismatch between the profiles of these two curves, a PVDG unit may produce only part of its capacity at the time the feeder meets its peak load demand. An actual example is the summer peak load demand in Abu Dhabi city, which occurs at 5:30 pm, 5 hours after the PV array yields its peak. Consequently, solving the optimization problem for maximum line power loss reduction (ΔP_L) is deemed inappropriate for the connection of PVDG units, and the procedures have instead been designed to solve for maximum line energy loss reduction (ΔE_L).

A suitable concept has been developed to rate ΔE_L at one time interval over the day, namely the feasible optimization interval (FOI). The concept is put into effect by rating ΔE_L in terms of the line power loss reduction at the FOI (ΔP_L^FOI). This application is very helpful in running the calculations without repeating the energy-based calculations at hourly or shorter intervals.

The procedures developed in this work have been applied to actual feeders at the 11 kV level of the Abu Dhabi distribution network. Two main scenarios have been considered, relating to the avoidance and allowance of reverse power flow (RPF). Several applications employing both single and multiple PVDG units have been solved and validated. The optimization procedures are solved iteratively; effective sub-procedures to determine the appropriate number of feasible iterative steps have therefore been developed and incorporated. Additionally, the optimization procedures are designed to deal with a 3-phase feeder under unbalanced load conditions: the line impedances along the feeder are modeled in terms of a phase impedance matrix, while the feeder load curves, the power flow calculations, and the resulting line losses are handled per phase. The resulting benefits of each application have been evaluated and compared in terms of ΔP_L^FOI along with voltage and current flow profiles.
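A toy numerical illustration of why the thesis optimizes ΔE_L rather than peak-time ΔP_L: the sketch below sums hourly I²R losses on a single lumped line section with and without a PV injection whose noon peak precedes the evening load peak. The feeder parameters and the load and irradiance shapes are invented for illustration; the thesis runs full per-phase power flows on real feeder models.

```python
import numpy as np

def daily_line_energy_loss(load_kw, pv_kw, r_ohm=0.3, v_kv=11.0):
    """Hourly I^2 R loss on one series line section feeding a lumped
    load, with a PVDG unit offsetting part of the demand. A toy
    single-section stand-in for a per-phase power-flow sweep."""
    net_kw = load_kw - pv_kw                      # power drawn through the line
    i_amp = np.abs(net_kw) / (np.sqrt(3) * v_kv)  # rough balanced 3-phase current
    loss_kw = 3 * i_amp**2 * r_ohm / 1000.0       # three conductors, W -> kW
    return loss_kw.sum()                          # ~kWh over the day

hours = np.arange(24)
load = 800 + 400 * np.exp(-((hours - 17.5) / 3.0)**2)             # peak ~5:30 pm
pv = 300 * np.clip(np.cos((hours - 12.5) * np.pi / 12), 0, None)  # peak ~12:30 pm
base = daily_line_energy_loss(load, np.zeros(24))
with_pv = daily_line_energy_loss(load, pv)
print("Delta E_L (kWh/day):", base - with_pv)
```

Because the PV output is small at 5:30 pm, the loss reduction measured only at the load peak understates the daily energy benefit that ΔE_L captures.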
218.
A study of stochastic differential equations and Fokker-Planck equations with applications. Li, Wuchen, 27 May 2016.
Fokker-Planck equations, along with stochastic differential equations, play vital roles in physics, population modeling, game theory and optimization (finite or infinite dimensional). In this thesis, we study three topics centered on them, both theoretically and computationally. In part one, we consider optimal transport for finite discrete states on a finite but arbitrary graph. By defining a discrete 2-Wasserstein metric, we derive Fokker-Planck equations on finite graphs as gradient flows of free energies. Using a dynamical viewpoint, we obtain an exponential convergence result to equilibrium. This derivation provides tools for many applications, including numerics for nonlinear partial differential equations and evolutionary game theory. In part two, we introduce a new stochastic differential equation based framework for optimal control with constraints. The framework can efficiently solve several real-world problems in differential games and robotics, including the path-planning problem. In part three, we introduce a new noise model for stochastic oscillators. With this model, we prove global boundedness of trajectories. In addition, we derive a pair of associated Fokker-Planck equations.
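A hedged sketch of the part-one objects: the snippet below integrates one upwind discretization of a Fokker-Planck equation on a small graph, with free-energy potential F_i = psi_i + beta*log(rho_i), and checks convergence to the Gibbs equilibrium. The particular discretization, step size, and graph are illustrative assumptions, not the thesis's exact scheme.

```python
import numpy as np

def graph_fokker_planck(psi, adj, beta=1.0, dt=1e-3, steps=20000):
    """Upwind ODE on a graph: mass flows along each edge from the
    node with higher potential F_i = psi_i + beta*log(rho_i) to the
    lower one. Total mass is conserved, and the Gibbs distribution
    rho ~ exp(-psi/beta) is the stationary point."""
    n = len(psi)
    rho = np.full(n, 1.0 / n)
    for _ in range(steps):
        F = psi + beta * np.log(rho)
        drho = np.zeros(n)
        for i in range(n):
            for j in np.nonzero(adj[i])[0]:
                drho[i] += rho[j] * max(F[j] - F[i], 0) \
                         - rho[i] * max(F[i] - F[j], 0)
        rho = np.maximum(rho + dt * drho, 1e-12)  # keep rho positive
    return rho

# 4-node path graph with a potential well at node 0.
adj = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]])
psi = np.array([0.0, 1.0, 2.0, 3.0])
rho = graph_fokker_planck(psi, adj)
gibbs = np.exp(-psi); gibbs /= gibbs.sum()
print(rho, gibbs)  # should be close: convergence to equilibrium
```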
219.
Degradation of cellulosic material by Cellulomonas fimi. Kane, Steven Daniel, January 2015.
The world's stocks of fossil fuels are dwindling and may be all but exhausted before the end of the century. Despite this, demand for transport fuels keeps increasing, along with the greenhouse gases their use produces. Renewable and less environmentally damaging fuels are needed. Biofuels, particularly bioethanol, offer a way to supplement or replace fossil fuels altogether. Ethanol produced by fermentation of starch sugars from corn is already in wide use. Because this bioethanol is currently produced from crops such as corn and sugar cane, fuel crops compete directly with food crops for space and resources; this has led to increases in food prices and the search for more arable land. Hydrolysis of lignocellulosic biomass, a waste by-product of many industries, to produce the sugars necessary for ethanol production would ease many of the problems with current biofuels. Degradation of lignocellulose is not simple, however: it requires expensive chemical pre-treatments and large quantities of enzymes, usually from fungal species, making the product about 10 times more expensive than corn starch bioethanol. The production of a consolidated bioprocessor, an organism able to degrade, metabolise and ferment cellulosic material to produce ethanol or other useful products, would greatly reduce the cost currently associated with lignocellulosic biofuel.

Cellulomonas fimi ATCC 484 is an actinomycete soil bacterium able to efficiently degrade cellulosic material. The US Department of Energy (DOE) released its genome sequence at the start of 2012. In this thesis the released genome has been searched for genes annotated as encoding polysaccharide-degrading enzymes, as well as for metabolic pathways. Over 100 genes predicted to code for polysaccharide-hydrolysing enzymes were identified. Fifteen of these genes have been cloned as BioBricks, the standard synthetic biology functional unit, expressed in E. coli and C. freundii, and assayed for endo β-1,4-glucanase activity using RBB-CMC, endo β-1,4-xylanase activity using RBB-xylan, β-D-xylosidase activity using ONPX, β-D-cellobiohydrolase activity using ONPC, and α-L-arabinofuranosidase activity using PNPA. Eleven enzymes not previously reported from C. fimi were identified as active on a substrate, the strongest activities being 2 arabinofuranosidases (AfsA+B), 4 β-xylosidases (BxyC, BxyF, CelE and XynH), an endoglucanase (CelA), and 2 multifunctional enzymes, CelD and XynF, active as cellobiohydrolases, xylosidases and endoxylanases.

Four enzymes were purified from E. coli cell lysates and characterised. AfsB has optimum activity at pH 6.5 and 45°C, BxyF at pH 6.0 and 45°C, and XynH at pH 9.0 and 80°C. XynF exhibited different optima for the 3 substrates: pH 6.0 and 60°C for ONPC, pH 4.5 and 50°C for ONPX, and pH 5.5 and 40°C for RBB-xylan. Searching the genome and screening genes for activities will help genome annotation in the future by increasing the number of positively annotated genes in the databases. The BioBrick format is well suited for rapid cloning and expression of genes to be classified. Searching and screening the genome has also given insights into the complex and large network of enzymes required to fully hydrolyse and metabolise the sugars released from lignocellulose. These enzymes are spread across many different glycosyl hydrolase families conferring different catalytic activities.
The characterisation of these novel enzymes points towards a system adapted not only to a broad range of substrates but also to environmental factors such as high temperature and pH. Genomic analysis revealed gene clusters and traits which could be used in the design of a synthetic cellulolytic network, or for the conversion of C. fimi into a consolidated bioprocessor itself.
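A sketch of how such an annotation search might be scripted with Biopython; the GenBank file name and the keyword list are illustrative assumptions, since the abstract does not specify the search tooling.

```python
from Bio import SeqIO  # Biopython

# Keywords loosely matching the enzyme classes assayed in the thesis
# (glucanases, xylanases, xylosidases, arabinofuranosidases, ...).
KEYWORDS = ("glycosyl hydrolase", "glucanase", "xylanase",
            "xylosidase", "arabinofuranosidase", "cellobiohydrolase")

hits = []
for record in SeqIO.parse("c_fimi_ATCC484.gb", "genbank"):  # assumed file
    for feat in record.features:
        if feat.type != "CDS":
            continue
        product = feat.qualifiers.get("product", [""])[0].lower()
        if any(k in product for k in KEYWORDS):
            hits.append((feat.qualifiers.get("locus_tag", ["?"])[0], product))

print(len(hits), "candidate polysaccharide-degradation genes")
```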
220.
Optimal Parsing for dictionary text compression / Parsing optimal pour la compression du texte par dictionnaire. Langiu, Alessio, 3 April 2012.
Dictionary-based compression algorithms include a parsing strategy to transform the input text into a sequence of dictionary phrases. Given a text, such a process usually is not unique and, for compression purposes, it makes sense to find, among the possible parses, one that minimizes the final compression ratio. This is the parsing problem. An optimal parsing is a parsing strategy, or a parsing algorithm, that solves the parsing problem taking account of all the constraints of a compression algorithm or of a class of homogeneous compression algorithms. Compression algorithm constraints are, for instance, the dictionary itself, i.e. the dynamic set of available phrases, and how much a phrase weighs in the compressed text, i.e. the length of the codeword that represents the phrase, also called the cost of encoding a dictionary pointer. In more than 30 years of history of dictionary-based text compression, while plenty of algorithms, variants and extensions have appeared, and while this approach has become one of the most appreciated and utilized in almost all storage and communication processes, only a few optimal parsing algorithms have been presented. Many compression algorithms still lack optimality of their parsing or, at least, a proof of optimality. This happens because there has been no general model of the parsing problem that includes all dictionary-based algorithms, and because the existing optimal parsings work under too-restrictive hypotheses. This work focuses on the parsing problem and presents both a general model for dictionary-based text compression, called the Dictionary-Symbolwise theory, and a general parsing algorithm that is proved to be optimal under some realistic hypotheses. This algorithm is called Dictionary-Symbolwise Flexible Parsing and it covers almost all cases of dictionary-based text compression algorithms, together with the large class of their variants where the text is decomposed into a sequence of symbols and dictionary phrases.
In this work we further consider the case of a free mixture of a dictionary compressor and a symbolwise compressor; Dictionary-Symbolwise Flexible Parsing covers this case as well. We indeed have an optimal parsing algorithm for dictionary-symbolwise compression where the dictionary is prefix-closed and the cost of encoding a dictionary pointer is variable. The symbolwise compressor is any classical one that works in linear time, as many common variable-length encoders do. Our algorithm works under the assumption that a special graph, described in the following, is well defined. Even if this condition is not satisfied, the same method can be used to obtain almost-optimal parses. In detail, when the dictionary is LZ78-like, we show how to implement our algorithm in linear time; when the dictionary is LZ77-like, our algorithm can be implemented in time O(n log n), where n is the length of the text. Both have O(n) space complexity. Although the main aim of this work is theoretical, some experimental results are presented to underline practical effects of parsing optimality on compression performance, and more detailed experiments are reported in a dedicated appendix.
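As a hedged illustration of the parsing problem, the Python sketch below finds a minimum-cost parse by shortest path over text prefixes, using a static dictionary with fixed per-phrase codeword costs. This is a toy stand-in: the thesis's Dictionary-Symbolwise Flexible Parsing handles dynamic LZ-style dictionaries and variable pointer costs in linear or O(n log n) time.

```python
def optimal_parse(text, dictionary, symbol_cost=9):
    """Minimum-cost parse of `text` into dictionary phrases plus
    single-symbol literals: shortest path in the parsing graph whose
    node i is the text prefix of length i. Costs (in bits) are
    illustrative stand-ins for a real pointer/literal encoding."""
    n = len(text)
    INF = float("inf")
    best = [INF] * (n + 1); best[0] = 0
    choice = [None] * (n + 1)
    for i in range(n):
        if best[i] == INF:
            continue
        # literal-symbol edge (the "symbolwise" part of the mixture)
        if best[i] + symbol_cost < best[i + 1]:
            best[i + 1] = best[i] + symbol_cost
            choice[i + 1] = text[i]
        # dictionary-phrase edges
        for phrase, cost in dictionary.items():
            j = i + len(phrase)
            if text.startswith(phrase, i) and best[i] + cost < best[j]:
                best[j] = best[i] + cost
                choice[j] = phrase
    # backtrack the cheapest path from the full text to the empty prefix
    parse, i = [], n
    while i > 0:
        parse.append(choice[i]); i -= len(choice[i])
    return best[n], parse[::-1]

# Static toy dictionary with per-phrase codeword costs (bits).
D = {"ab": 12, "abc": 13, "bca": 13, "ca": 12}
print(optimal_parse("abcabca", D))
```

A greedy longest-match parse can be strictly worse than this shortest-path parse, which is exactly the gap between ad hoc parsing and an optimal one.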